Updates from: 02/04/2021 04:07:19
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/multi-factor-authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/multi-factor-authentication.md
@@ -8,7 +8,7 @@
Previously updated : 12/10/2020 Last updated : 02/01/2021
@@ -37,9 +37,12 @@ This feature helps applications handle scenarios such as:
1. Select the user flow for which you want to enable MFA. For example, *B2C_1_signinsignup*.
1. Select **Properties**.
1. In the **Multifactor authentication** section, select the desired **MFA method**, and then under **MFA enforcement** select **Always on**, or **Conditional (Recommended)**.
- > [!NOTE]
- > If you select **Conditional (Recommended)**, you'll also need to [add a Conditional Access policy](conditional-access-identity-protection-setup.md#add-a-conditional-access-policy) and specify the apps you want the policy to apply to.
-1. Select Save. MFA is now enabled for this user flow.
+ > [!NOTE]
+ >
+ > - If you select **Conditional (Recommended)**, you'll also need to [add a Conditional Access policy](conditional-access-identity-protection-setup.md#add-a-conditional-access-policy) and specify the apps you want the policy to apply to.
+ > - Multi-factor authentication (MFA) is disabled by default for sign-up user flows. You can enable MFA in user flows with phone sign-up, but because a phone number is used as the primary identifier, email one-time passcode is the only option available for the second authentication factor.
+
+1. Select **Save**. MFA is now enabled for this user flow.
You can use **Run user flow** to verify the experience. Confirm the following scenario:
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/phone-authentication-user-flows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/phone-authentication-user-flows.md
@@ -9,7 +9,7 @@
Previously updated : 10/29/2020 Last updated : 02/01/2021
@@ -85,7 +85,7 @@ Here's an example showing how to add phone sign-up to a new user flow.
9. Under **Social identity providers**, select any other identity providers you want to allow for this user flow.

> [!NOTE]
- > Multi-factor authentication (MFA) is disabled by default. You can enable MFA for a phone sign-up user flow, but because a phone number is used as the primary identifier, email one-time passcode is the only option available for the second authentication factor.
+ > Multi-factor authentication (MFA) is disabled by default for sign-up user flows. You can enable MFA for a phone sign-up user flow, but because a phone number is used as the primary identifier, email one-time passcode is the only option available for the second authentication factor.
1. In the **User attributes and token claims** section, choose the claims and attributes that you want to collect and send from the user during sign-up. For example, select **Show more**, and then choose attributes and claims for **Country/Region**, **Display Name**, and **Postal Code**. Select **OK**.
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/phone-based-mfa https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/phone-based-mfa.md
@@ -0,0 +1,118 @@
+
+ Title: Securing phone-based MFA in Azure AD B2C
+
+description: Learn tips for securing phone-based multi-factor authentication (MFA) in your Azure AD B2C tenant by using Azure Monitor Log Analytics reports and alerts. Use our workbook to identify fraudulent phone authentications and mitigate fraudulent sign-ups.
+ Last updated : 02/01/2021
+# Securing phone-based multi-factor authentication (MFA)
++
+With Azure Active Directory (Azure AD) Multi-Factor Authentication (MFA), users can choose to receive an automated voice call at a phone number they register for verification. Malicious users could take advantage of this method by creating multiple accounts and placing phone calls without completing the MFA registration process. These numerous failed sign-ups could exhaust the allowed sign-up attempts, preventing other users from signing up for new accounts in your Azure AD B2C tenant. To help protect against these attacks, you can use Azure Monitor to monitor phone authentication failures and mitigate fraudulent sign-ups.
+
+## Prerequisites
+
+Before you begin, create a [Log Analytics workspace](azure-monitor.md).
+
+## Create a phone-based MFA events workbook
+
+The [Azure AD B2C Reports & Alerts](https://github.com/azure-ad-b2c/siem#phone-authentication-failures) repository in GitHub contains artifacts you can use to create and publish reports, alerts, and dashboards based on Azure AD B2C logs. The draft workbook pictured below highlights phone-related failures.
+
+### Overview tab
+
+The following information is shown on the **Overview** tab:
+
+- Failure Reasons (the total number of failed phone authentications for each given reason)
+- Blocked Due to Bad Reputation
+- IP Address with Failed Phone Authentications (the total count of failed phone authentications for each given IP address)
+- Phone Numbers With IP address - Failed Phone Authentications
+- Browser (phone authentication failures per client browser)
+- Operating System (phone authentication failures per client operating system)
+
+![Overview tab](media/phone-based-mfa/overview-tab.png)
+
+### Details tab
+
+The following information is reported on the **Details** tab:
+
+- Azure AD B2C Policy - Failed Phone Authentications
+- Phone Authentication Failures by Phone Number – Time Chart (adjustable timeline)
+- Phone Authentication Failures by Azure AD B2C Policy – Time Chart (adjustable timeline)
+- Phone Authentication Failures by IP Address – Time Chart (adjustable timeline)
+- Select Phone Number to View Failure Details (select a phone number for a detailed list of failures)
+
+![Details tab 1 of 3](media/phone-based-mfa/details-tab-1.png)
+
+![Details tab 2 of 3](media/phone-based-mfa/details-tab-2.png)
+
+![Details tab 3 of 3](media/phone-based-mfa/details-tab-3.png)
+
+## Use the workbook to identify fraudulent sign-ups
+
+You can use the workbook to understand phone-based MFA events and identify potentially malicious use of the telephony service.
+
+1. Understand what's normal for your tenant by answering these questions:
+
+ - What are the regions from which you expect phone-based MFA?
+ - Examine the reasons shown for failed phone-based MFA attempts; are they considered normal or expected?
+
+2. Recognize the characteristics of fraudulent sign-up:
+
+ - **Location-based**: Examine **Phone Authentication Failures by IP Address** for any accounts that are associated with locations from which you don't expect users to sign up.
+
+ > [!NOTE]
+ > The IP address provided indicates only an approximate region.
+
+ - **Velocity-based**: Look at **Failed Phone Authentications Over Time (Per Day)**, which indicates phone numbers that are making an abnormal number of failed phone authentication attempts per day, ordered from highest (left) to lowest (right).
+
+3. Mitigate fraudulent sign-ups by following the steps in the next section.
+
+
+## Mitigate fraudulent sign-ups
+
+Take the following actions to help mitigate fraudulent sign-ups.
+
+- Use the **Recommended** versions of user flows to do the following:
+
+ - [Enable the email one-time passcode (OTP) feature](phone-authentication-user-flows.md) for MFA (applies to both sign-up and sign-in flows).
+ - [Configure a Conditional Access policy](conditional-access-identity-protection-setup.md) to block sign-ins based on location (applies to sign-in flows only, not sign-up flows).
+ - Use API connectors to [integrate with an anti-bot solution like reCAPTCHA](https://github.com/Azure-Samples/active-directory-b2c-node-sign-up-user-flow-captcha) (applies to sign-up flows).
+
+- Remove country codes that aren't relevant to your organization from the drop-down menu where the user verifies their phone number (this change will apply to future sign-ups):
+
+ 1. Sign in to the [Azure portal](https://portal.azure.com) as the global administrator of your Azure AD B2C tenant.
+
+ 2. Make sure you're using the directory that contains your Azure AD B2C tenant: select the **Directory + subscription** filter in the top menu, and then choose that directory.
+
+ 3. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**.
+
+ 4. Select the user flow, and then select **Languages**. Select the language for your organization's geographic location to open the language details panel. (For this example, we'll select **English en** for the United States). Select **Multifactor authentication page**, and then select **Download defaults (en)**.
+
+ ![Upload new overrides to download defaults](media/phone-based-mfa/download-defaults.png)
+
+ 5. Open the JSON file that was downloaded in the previous step. In the file, search for `DEFAULT`, and replace the line with `"Value": "{\"DEFAULT\":\"Country/Region\",\"US\":\"United States\"}"`. Be sure to set `Overrides` to `true`.
+
+ > [!NOTE]
+ > You can customize the list of allowed country codes in the `countryList` element (see the [Phone factor authentication page example](localization-string-ids.md#phone-factor-authentication-page-example)).
+
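+ For example, after this edit the modified entry might look like the following sketch. This is illustrative only; keep the exact structure and any other fields as they appear in your downloaded defaults file:
+
+ ```json
+ {
+   "Overrides": true,
+   "LocalizedStrings": [
+     {
+       "ElementType": "UxElement",
+       "StringId": "countryList",
+       "Value": "{\"DEFAULT\":\"Country/Region\",\"US\":\"United States\"}"
+     }
+   ]
+ }
+ ```
+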
+ 6. Save the JSON file. In the language details panel, under **Upload new overrides**, select the modified JSON file to upload it.
+
+ 7. Close the panel and select **Run user flow**. For this example, confirm that **United States** is the only country code available in the dropdown:
+
+ ![Country code drop-down](media/phone-based-mfa/country-code-drop-down.png)
+
+## Next steps
+
+- Learn about [Identity Protection and Conditional Access for Azure AD B2C](conditional-access-identity-protection-overview.md)
+
+- Apply [Conditional Access to user flows in Azure Active Directory B2C](conditional-access-user-flow.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-authentication-oath-tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-oath-tokens.md
@@ -32,7 +32,7 @@ Some OATH TOTP hardware tokens are programmable, meaning they don't come with a
Azure AD supports the use of OATH-TOTP SHA-1 tokens that refresh codes every 30 or 60 seconds. Customers can purchase these tokens from the vendor of their choice.
-OATH TOTP hardware tokens typically come with a secret key, or seed, pre-programmed in the token. These keys must be input into Azure AD as described in the following steps. Secret keys are limited to 128 characters, which may not be compatible with all tokens. The secret key can only contain the characters *a-z* or *A-Z* and digits *1-7*, and must be encoded in *Base32*.
+OATH TOTP hardware tokens typically come with a secret key, or seed, pre-programmed in the token. These keys must be input into Azure AD as described in the following steps. Secret keys are limited to 128 characters, which may not be compatible with all tokens. The secret key can only contain the characters *a-z* or *A-Z* and digits *2-7*, and must be encoded in *Base32*.
Programmable OATH TOTP hardware tokens that can be reseeded can also be set up with Azure AD in the software token setup flow.
@@ -44,7 +44,7 @@ Once tokens are acquired they must be uploaded in a comma-separated values (CSV)
```csv
upn,serial number,secret key,time interval,manufacturer,model
-Helga@contoso.com,1234567,1234567abcdef1234567abcdef,60,Contoso,HardwareKey
+Helga@contoso.com,1234567,2234567abcdef1234567abcdef,60,Contoso,HardwareKey
```

> [!NOTE]
active-directory https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/concept-continuous-access-evaluation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
@@ -45,7 +45,7 @@ Continuous access evaluation is implemented by enabling services, like Exchange
- Password for a user is changed or reset
- Multi-factor authentication is enabled for the user
- Administrator explicitly revokes all refresh tokens for a user
-- Elevated user risk detected by Azure AD Identity Protection
+- High user risk detected by Azure AD Identity Protection
This process enables the scenario where users lose access to organizational SharePoint Online files, email, calendar, or tasks, and Teams from Microsoft 365 client apps within minutes after one of these critical events.
@@ -184,4 +184,4 @@ Sign-in Frequency will be honored with or without CAE.
## Next steps
-[Announcing continuous access evaluation](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/moving-towards-real-time-policy-and-security-enforcement/ba-p/1276933)
+[Announcing continuous access evaluation](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/moving-towards-real-time-policy-and-security-enforcement/ba-p/1276933)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/reference-app-manifest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-app-manifest.md
@@ -168,7 +168,8 @@ Configures the `groups` claim issued in a user or OAuth 2.0 access token that th
- `"None"` - `"SecurityGroup"` (for security groups and Azure AD roles) - `"ApplicationGroup"` (this option includes only groups that are assigned to the application)-- `"All"` (this will get all of the security groups, distribution groups, and Azure AD directory roles that the signed-in user is a member of.
+- `"DirectoryRole"` (gets the Azure AD directory roles the user is a member of)
+- `"All"` (this will get all of the security groups, distribution groups, and Azure AD directory roles that the signed-in user is a member of).
Example:
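A minimal sketch of how this looks in the application manifest, using one of the values listed above (assuming the `groupMembershipClaims` manifest attribute that this section documents):

```json
"groupMembershipClaims": "SecurityGroup"
```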
active-directory https://docs.microsoft.com/en-us/azure/active-directory/hybrid/reference-connect-dirsync-deprecated https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-dirsync-deprecated.md
@@ -36,6 +36,7 @@ Azure AD Connect is the successor to DirSync and Azure AD Sync. It combines all
| April 13, 2016 |Windows Azure Active Directory Sync ("DirSync") and Microsoft Azure Active Directory Sync ("Azure AD Sync") are announced as deprecated. |
| April 13, 2017 |Support ends. Customers will no longer be able to open a support case without upgrading to Azure AD Connect first. |
|December 31, 2017|Azure AD may no longer accept communications from Windows Azure Active Directory Sync ("DirSync") and Microsoft Azure Active Directory Sync ("Azure AD Sync").
+|April 1, 2021| Windows Azure Active Directory Sync ("DirSync") and Microsoft Azure Active Directory Sync ("Azure AD Sync") will no longer work. |
## How to transition to Azure AD Connect

If you are running DirSync, there are two ways you can upgrade: an in-place upgrade or a parallel deployment. An in-place upgrade is recommended for most customers, and for deployments with a recent operating system and fewer than 50,000 objects. In other cases, it is recommended to do a parallel deployment, where your DirSync configuration is moved to a new server running Azure AD Connect.
@@ -56,7 +57,7 @@ If you want to see how to do an in-place upgrade from DirSync to Azure AD Connec
The notification was also sent to customers using Azure AD Connect with a build number 1.0.\*.0 (a pre-1.1 release). Microsoft recommends that customers stay current with Azure AD Connect releases. The [automatic upgrade](how-to-connect-install-automatic-upgrade.md) feature introduced in 1.1 makes it easy to always have a recent version of Azure AD Connect installed.

**Q: Will DirSync/Azure AD Sync stop working on April 13, 2017?**
-DirSync/Azure AD Sync will continue to work on April 13, 2017. However, Azure AD may no longer accept communications from DirSync/Azure AD Sync after December 31, 2017.
+DirSync/Azure AD Sync will continue to work on April 13, 2017. However, Azure AD may no longer accept communications from DirSync/Azure AD Sync after December 31, 2017. DirSync and Azure AD Sync will no longer work after April 1, 2021.
**Q: Which DirSync versions can I upgrade from?**
It is supported to upgrade from any DirSync release currently being used.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/howto-migrate-vm-extension https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/howto-migrate-vm-extension.md
@@ -13,8 +13,9 @@ ms.devlang: na
na Previously updated : 02/25/2018 Last updated : 02/03/2020
+
# How to stop using the virtual machine managed identities extension and start using the Azure Instance Metadata Service
active-directory https://docs.microsoft.com/en-us/azure/active-directory/privileged-identity-management/pim-resource-roles-discover-resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-resource-roles-discover-resources.md
@@ -61,7 +61,7 @@ When you first set up Privileged Identity Management for Azure resources, you ne
![Discovery pane with a resource selected and the Manage resource option highlighted](./media/pim-resource-roles-discover-resources/discovery-manage-resource.png)
-1. If you see a message to confirm the onboarding of the selected resource for management, select **Yes**.
+1. If you see a message to confirm the onboarding of the selected resource for management, select **Yes**. PIM will then be configured to manage all the new and existing child objects under the resource(s).
![Message confirming to onboard the selected resources for management](./media/pim-resource-roles-discover-resources/discovery-manage-resource-message.png)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/roles/permissions-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/permissions-reference.md
@@ -1041,18 +1041,19 @@ Can manage all aspects of the Exchange product.
| | |
| microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health. |
| microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets for directory-level services. |
-| microsoft.directory/groups/unified/appRoleAssignments/update | Update groups.unified property in Azure Active Directory. |
-| microsoft.directory/groups/unified/basic/update | Update basic properties of Microsoft 365 groups. |
-| microsoft.directory/groups/unified/create | Create Microsoft 365 groups. |
-| microsoft.directory/groups/unified/delete | Delete Microsoft 365 groups. |
-| microsoft.directory/groups/unified/members/update | Update membership of Microsoft 365 groups. |
-| microsoft.directory/groups/unified/owners/update | Update ownership of Microsoft 365 groups. |
+| microsoft.directory/groups/hiddenMembers/read | Read hidden members of a group |
+| microsoft.directory/groups.unified/basic/update | Update basic properties of Microsoft 365 groups. |
+| microsoft.directory/groups.unified/create | Create Microsoft 365 groups. |
+| microsoft.directory/groups.unified/delete | Delete Microsoft 365 groups. |
+| microsoft.directory/groups.unified/restore | Restore Microsoft 365 groups |
+| microsoft.directory/groups.unified/members/update | Update membership of Microsoft 365 groups. |
+| microsoft.directory/groups.unified/owners/update | Update ownership of Microsoft 365 groups. |
| microsoft.office365.exchange/allEntities/allTasks | Manage all aspects of Exchange Online. |
| microsoft.office365.network/performance/allProperties/read | Read network performance pages in Microsoft 365 Admin Center. |
| microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Microsoft 365 Service Health. |
| microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Office 365 support tickets. |
-| microsoft.office365.usageReports/allEntities/read | Read Office 365 usage reports. |
-| microsoft.office365.webPortal/allEntities/basic/read | Read basic properties on all resources in microsoft.office365.webPortal. |
+| microsoft.office365.usageReports/allEntities/allProperties/read | Read Office 365 usage reports. |
+| microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in microsoft.office365.webPortal. |
### External ID User Flow Administrator permissions
@@ -1290,23 +1291,24 @@ Can manage all aspects of the Intune product.
| microsoft.directory/devices/extensionAttributes/update | Update all values for devices.extensionAttributes property in Azure Active Directory. |
| microsoft.directory/devices/registeredOwners/update | Update devices.registeredOwners property in Azure Active Directory. |
| microsoft.directory/devices/registeredUsers/update | Update devices.registeredUsers property in Azure Active Directory. |
-| microsoft.directory/groups/appRoleAssignments/update | Update groups.appRoleAssignments property in Azure Active Directory. |
-| microsoft.directory/groups/basic/update | Update basic properties on groups in Azure Active Directory. |
-| microsoft.directory/groups/create | Create groups in Azure Active Directory. |
-| microsoft.directory/groups/createAsOwner | Create groups in Azure Active Directory. Creator is added as the first owner, and the created object counts against the creator's 250 created objects quota. |
-| microsoft.directory/groups/delete | Delete groups in Azure Active Directory. |
+| microsoft.directory/deviceManagementPolicies/standard/read | Read standard properties on device management application policies |
+| microsoft.directory/deviceRegistrationPolicy/standard/read | Read standard properties on device registration policies |
| microsoft.directory/groups/hiddenMembers/read | Read groups.hiddenMembers property in Azure Active Directory. |
-| microsoft.directory/groups/members/update | Update groups.members property in Azure Active Directory. |
-| microsoft.directory/groups/owners/update | Update groups.owners property in Azure Active Directory. |
-| microsoft.directory/groups/restore | Restore groups in Azure Active Directory. |
-| microsoft.directory/groups/settings/update | Update groups.settings property in Azure Active Directory. |
-| microsoft.directory/users/appRoleAssignments/update | Update users.appRoleAssignments property in Azure Active Directory. |
+| microsoft.directory/groups.security/basic/update | Update basic properties on groups in Azure Active Directory. |
+| microsoft.directory/groups.security/classification/update | Update classification property of the Security groups with the exclusion of role-assignable groups |
+| microsoft.directory/groups.security/create | Create groups in Azure Active Directory. |
+| microsoft.directory/groups.security/delete | Delete groups in Azure Active Directory. |
+| microsoft.directory/groups.security/dynamicMembershipRule/update | Update dynamicMembershipRule property of the Security groups with the exclusion of role-assignable groups |
+| microsoft.directory/groups.security/groupType/update | Update group type property of the Security groups with the exclusion of role-assignable groups |
+| microsoft.directory/groups.security/members/update | Update groups.members property in Azure Active Directory. |
+| microsoft.directory/groups.security/owners/update | Update groups.owners property in Azure Active Directory. |
+| microsoft.directory/groups.security/visibility/update | Update visibility property of the Security groups with the exclusion of role-assignable groups |
| microsoft.directory/users/basic/update | Update basic properties on users in Azure Active Directory. |
| microsoft.directory/users/manager/update | Update users.manager property in Azure Active Directory. |
| microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets for directory-level services. |
| microsoft.intune/allEntities/allTasks | Manage all aspects of Intune. |
| microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Office 365 support tickets. |
-| microsoft.office365.webPortal/allEntities/basic/read | Read basic properties on all resources in microsoft.office365.webPortal. |
+| microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in microsoft.office365.webPortal. |
### Kaizala Administrator permissions
@@ -1453,13 +1455,18 @@ Do not use - not intended for general use.
| microsoft.directory/contacts/create | Create contacts in Azure Active Directory. |
| microsoft.directory/contacts/delete | Delete contacts in Azure Active Directory. |
| microsoft.directory/groups/create | Create groups in Azure Active Directory. |
-| microsoft.directory/groups/createAsOwner | Create groups in Azure Active Directory. Creator is added as the first owner, and the created object counts against the creator's 250 created objects quota. |
+| microsoft.directory/groups/delete | Delete groups, excluding role-assignable group |
| microsoft.directory/groups/members/update | Update groups.members property in Azure Active Directory. |
| microsoft.directory/groups/owners/update | Update groups.owners property in Azure Active Directory. |
-| microsoft.directory/users/appRoleAssignments/update | Update users.appRoleAssignments property in Azure Active Directory. |
+| microsoft.directory/groups/restore | Restore deleted groups |
+| microsoft.directory/oAuth2PermissionGrants/allProperties/allTasks | Create and delete OAuth 2.0 permission grants, and read and update all properties |
+| microsoft.directory/servicePrincipals/appRoleAssignedTo/update | Update service principal role assignments |
| microsoft.directory/users/assignLicense | Manage licenses on users in Azure Active Directory. |
| microsoft.directory/users/basic/update | Update basic properties on users in Azure Active Directory. |
+| microsoft.directory/users/create | Add users |
| microsoft.directory/users/delete | Delete users in Azure Active Directory. |
+| microsoft.directory/users/disable | Disable users |
+| microsoft.directory/users/enable | Enable users |
| microsoft.directory/users/invalidateAllRefreshTokens | Invalidate all user refresh tokens in Azure Active Directory. |
| microsoft.directory/users/manager/update | Update users.manager property in Azure Active Directory. |
| microsoft.directory/users/password/update | Update passwords for all users in Azure Active Directory. See online documentation for more detail. |
@@ -1467,9 +1474,9 @@ Do not use - not intended for general use.
| microsoft.directory/users/userPrincipalName/update | Update users.userPrincipalName property in Azure Active Directory. |
| microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health. |
| microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets for directory-level services. |
-| microsoft.office365.webPortal/allEntities/basic/read | Read basic properties on all resources in microsoft.office365.webPortal. |
| microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Microsoft 365 Service Health. |
| microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Office 365 support tickets. |
+| microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in microsoft.office365.webPortal. |
### Partner Tier2 Support permissions
@@ -1493,16 +1500,25 @@ Do not use - not intended for general use.
| microsoft.directory/contacts/basic/update | Update basic properties on contacts in Azure Active Directory. |
| microsoft.directory/contacts/create | Create contacts in Azure Active Directory. |
| microsoft.directory/contacts/delete | Delete contacts in Azure Active Directory. |
-| microsoft.directory/domains/allTasks | Create and delete domains, and read and update standard properties in Azure Active Directory. |
+| microsoft.directory/domains/basic/allTasks | Create and delete domains, and read and update standard properties in Azure Active Directory. |
| microsoft.directory/groups/create | Create groups in Azure Active Directory. |
| microsoft.directory/groups/delete | Delete groups in Azure Active Directory. |
| microsoft.directory/groups/members/update | Update groups.members property in Azure Active Directory. |
+| microsoft.directory/groups/owners/update | Update owners of groups, excluding role-assignable groups |
| microsoft.directory/groups/restore | Restore groups in Azure Active Directory. |
+| microsoft.directory/oAuth2PermissionGrants/allProperties/allTasks | Create and delete OAuth 2.0 permission grants, and read and update all properties |
| microsoft.directory/organization/basic/update | Update basic properties on organization in Azure Active Directory. |
-| microsoft.directory/users/appRoleAssignments/update | Update users.appRoleAssignments property in Azure Active Directory. |
+| microsoft.directory/roleAssignments/allProperties/allTasks | Create and delete role assignments, and read and update all role assignment properties |
+| microsoft.directory/roleDefinitions/allProperties/allTasks | Create and delete role definitions, and read and update all properties |
+| microsoft.directory/scopedRoleMemberships/allProperties/allTasks | Create and delete scopedRoleMemberships, and read and update all properties |
+| microsoft.directory/servicePrincipals/appRoleAssignedTo/update | Update service principal role assignments |
+| microsoft.directory/subscribedSkus/standard/read | Read basic properties on subscriptions |
| microsoft.directory/users/assignLicense | Manage licenses on users in Azure Active Directory. |
| microsoft.directory/users/basic/update | Update basic properties on users in Azure Active Directory. |
+| microsoft.directory/users/create | Add users |
| microsoft.directory/users/delete | Delete users in Azure Active Directory. |
+| microsoft.directory/users/disable | Disable users |
+| microsoft.directory/users/enable | Enable users |
| microsoft.directory/users/invalidateAllRefreshTokens | Invalidate all user refresh tokens in Azure Active Directory. |
| microsoft.directory/users/manager/update | Update users.manager property in Azure Active Directory. |
| microsoft.directory/users/password/update | Update passwords for all users in Azure Active Directory. See online documentation for more detail. |
@@ -1510,9 +1526,9 @@ Do not use - not intended for general use.
| microsoft.directory/users/userPrincipalName/update | Update users.userPrincipalName property in Azure Active Directory. |
| microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health. |
| microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets for directory-level services. |
-| microsoft.office365.webPortal/allEntities/basic/read | Read basic properties on all resources in microsoft.office365.webPortal. |
| microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Microsoft 365 Service Health. |
| microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Office 365 support tickets. |
+| microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in microsoft.office365.webPortal. |
### Password Administrator permissions
@@ -1780,18 +1796,18 @@ Can manage all aspects of the SharePoint service.
| | |
| microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health. |
| microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets for directory-level services. |
-| microsoft.directory/groups/unified/appRoleAssignments/update | Update groups.unified property in Azure Active Directory. |
-| microsoft.directory/groups/unified/basic/update | Update basic properties of Microsoft 365 groups. |
-| microsoft.directory/groups/unified/create | Create Microsoft 365 groups. |
-| microsoft.directory/groups/unified/delete | Delete Microsoft 365 groups. |
-| microsoft.directory/groups/unified/members/update | Update membership of Microsoft 365 groups. |
-| microsoft.directory/groups/unified/owners/update | Update ownership of Microsoft 365 groups. |
+| microsoft.directory/groups.unified/basic/update | Update basic properties of Microsoft 365 groups. |
+| microsoft.directory/groups.unified/create | Create Microsoft 365 groups. |
+| microsoft.directory/groups.unified/delete | Delete Microsoft 365 groups. |
+| microsoft.directory/groups.unified/members/update | Update membership of Microsoft 365 groups. |
+| microsoft.directory/groups.unified/owners/update | Update ownership of Microsoft 365 groups. |
+| microsoft.directory/groups.unified/restore | Restore Microsoft 365 groups |
| microsoft.office365.network/performance/allProperties/read | Read network performance pages in M365 Admin Center. |
| microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Microsoft 365 Service Health. |
| microsoft.office365.sharepoint/allEntities/allTasks | Create and delete all resources, and read and update standard properties in microsoft.office365.sharepoint. |
| microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Office 365 support tickets. |
-| microsoft.office365.usageReports/allEntities/read | Read Office 365 usage reports. |
-| microsoft.office365.webPortal/allEntities/basic/read | Read basic properties on all resources in microsoft.office365.webPortal. |
+| microsoft.office365.usageReports/allEntities/allProperties/read | Read Office 365 usage reports. |
+| microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in microsoft.office365.webPortal. |
### Teams Communications Administrator permissions
@@ -1875,16 +1891,19 @@ Can manage the Microsoft Teams service.
| microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets for directory-level services. |
| microsoft.directory/groups/hiddenMembers/read | Read groups.hiddenMembers property in Azure Active Directory. |
| microsoft.directory/groups/unified/appRoleAssignments/update | Update groups.unified property in Azure Active Directory. |
-| microsoft.directory/groups/unified/basic/update | Update basic properties of Microsoft 365 groups. |
-| microsoft.directory/groups/unified/create | Create Microsoft 365 groups. |
-| microsoft.directory/groups/unified/delete | Delete Microsoft 365 groups. |
-| microsoft.directory/groups/unified/members/update | Update membership of Microsoft 365 groups. |
-| microsoft.directory/groups/unified/owners/update | Update ownership of Microsoft 365 groups. |
+| microsoft.directory/groups.unified/basic/update | Update basic properties of Microsoft 365 groups. |
+| microsoft.directory/groups.unified/create | Create Microsoft 365 groups. |
+| microsoft.directory/groups.unified/delete | Delete Microsoft 365 groups. |
+| microsoft.directory/groups.unified/members/update | Update membership of Microsoft 365 groups. |
+| microsoft.directory/groups.unified/owners/update | Update ownership of Microsoft 365 groups. |
+| microsoft.directory/groups.unified/restore | Restore Microsoft 365 groups |
+| microsoft.directory/servicePrincipals/managePermissionGrantsForGroup.microsoft-all-application-permissions | Grant consent to delegated permissions on behalf of a group |
| microsoft.office365.network/performance/allProperties/read | Read network performance pages in M365 Admin Center. |
| microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Microsoft 365 Service Health. |
+| microsoft.office365.skypeForBusiness/allEntities/allTasks | Manage all aspects of Skype for Business Online |
| microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Office 365 support tickets. |
-| microsoft.office365.usageReports/allEntities/read | Read Office 365 usage reports. |
-| microsoft.office365.webPortal/allEntities/basic/read | Read basic properties on all resources in microsoft.office365.webPortal. |
+| microsoft.office365.usageReports/allEntities/allProperties/read | Read Office 365 usage reports. |
+| microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in microsoft.office365.webPortal. |
| microsoft.teams/allEntities/allProperties/allTasks | Manage all resources in Teams. |

### Usage Summary Reports Reader permissions
@@ -2045,24 +2064,24 @@ Workplace Device Join | Deprecated | [Deprecated roles documentation](permission
Column headings represent the roles that can reset passwords. Table rows contain the roles for which their password can be reset.
-Password can be reset | Authentication Admin | Helpdesk Admin | Password Admin | User Admin | Privileged Authentication Admin | Global Admin
+Password can be reset | Password Admin | Helpdesk Admin | Authentication Admin | User Admin | Privileged Authentication Admin | Global Admin
| | | | | |
-Authentication Admin | :heavy_check_mark: |   |   |   | :heavy_check_mark: | :heavy_check_mark:
+Authentication Admin |   |   | :heavy_check_mark: |   | :heavy_check_mark: | :heavy_check_mark:
Directory Readers | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
Global Admin |   |   |   |   | :heavy_check_mark: | :heavy_check_mark:\*
Groups Admin |   |   |   | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
Guest | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
Guest Inviter | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
Helpdesk Admin |   | :heavy_check_mark: |   | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-Message Center Reader | :heavy_check_mark: | :heavy_check_mark: |   | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+Message Center Reader |   | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
Password Admin | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
Privileged Authentication Admin |   |   |   |   | :heavy_check_mark: | :heavy_check_mark:
Privileged Role Admin |   |   |   |   | :heavy_check_mark: | :heavy_check_mark:
-Reports Reader | :heavy_check_mark: | :heavy_check_mark: |   | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+Reports Reader |   | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
Restricted Guest | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
User (no admin role) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
User Admin |   |   |   | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-Usage Summary Reports Reader | :heavy_check_mark: | :heavy_check_mark: |   | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+Usage Summary Reports Reader |   | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
\* A Global Administrator cannot remove their own Global Administrator assignment. This is to prevent a situation where an organization has 0 Global Administrators.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/kendis-scaling-agile-platform-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/kendis-scaling-agile-platform-tutorial.md
@@ -0,0 +1,172 @@
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Kendis-Scaling Agile Platform | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Kendis-Scaling Agile Platform.
+ Last updated : 01/28/2021
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Kendis-Scaling Agile Platform
+
+In this tutorial, you'll learn how to integrate Kendis-Scaling Agile Platform with Azure Active Directory (Azure AD). When you integrate Kendis-Scaling Agile Platform with Azure AD, you can:
+
+* Control in Azure AD who has access to Kendis-Scaling Agile Platform.
+* Enable your users to be automatically signed-in to Kendis-Scaling Agile Platform with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* A Kendis-Scaling Agile Platform subscription with single sign-on (SSO) enabled.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Kendis-Scaling Agile Platform supports **SP and IDP** initiated SSO
+* Kendis-Scaling Agile Platform supports **Just In Time** user provisioning
++
+## Adding Kendis-Scaling Agile Platform from the gallery
+
+To configure the integration of Kendis-Scaling Agile Platform into Azure AD, you need to add Kendis-Scaling Agile Platform from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Kendis-Scaling Agile Platform** in the search box.
+1. Select **Kendis-Scaling Agile Platform** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Kendis-Scaling Agile Platform
+
+Configure and test Azure AD SSO with Kendis-Scaling Agile Platform using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Kendis-Scaling Agile Platform.
+
+To configure and test Azure AD SSO with Kendis-Scaling Agile Platform, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Kendis-Scaling Agile Platform SSO](#configure-kendis-scaling-agile-platform-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Kendis-Scaling Agile Platform test user](#create-kendis-scaling-agile-platform-test-user)** - to have a counterpart of B.Simon in Kendis-Scaling Agile Platform that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Kendis-Scaling Agile Platform** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.kendis.io`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.kendis.io/login/saml`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.kendis.io/login`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Kendis-Scaling Agile Platform Client support team](mailto:support@kendis.io) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up Kendis-Scaling Agile Platform** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Kendis-Scaling Agile Platform.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Kendis-Scaling Agile Platform**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you'll see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Kendis-Scaling Agile Platform SSO
+
+1. In a different web browser window, sign in to your Kendis-Scaling Agile Platform company site as an administrator.
+
+1. Go to **Settings > SAML Configurations**.
+
+ ![settings to SAML Configurations](./media/kendis-scaling-agile-platform-tutorial/settings.png)
+
+1. Click the **Edit** button at the bottom of the page and perform the following steps.
+
+ ![SAML Configurations](./media/kendis-scaling-agile-platform-tutorial/saml-configuration-settings.png)
+
+ a. Copy the **Callback URL** value and paste it into the **Reply URL** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+ b. In the **Identity Provider Single Sign On URL** textbox, paste the **Login URL** value that you copied from the Azure portal.
+
+ c. In the **Identity Provider Issuer** textbox, paste the **Azure AD Identifier (Entity ID)** value that you copied from the Azure portal.
+
+ d. Open the **Certificate (Base64)** that you downloaded from the Azure portal in Notepad, and paste its content into the **X.509 Certificate** textbox.
+
+ e. **Select Default Group** from the list of options.
+
+ f. Click **Save**.
+
+### Create Kendis-Scaling Agile Platform test user
+
+In this section, a user called Britta Simon is created in Kendis-Scaling Agile Platform. Kendis-Scaling Agile Platform supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Kendis-Scaling Agile Platform, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This will redirect you to the Kendis-Scaling Agile Platform sign-on URL, where you can initiate the login flow.
+
+* Go to the Kendis-Scaling Agile Platform sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Kendis-Scaling Agile Platform instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Kendis-Scaling Agile Platform tile in My Apps, you're redirected to the application sign-on page to initiate the login flow (if configured in SP mode), or automatically signed in to the Kendis-Scaling Agile Platform instance for which you set up SSO (if configured in IDP mode). For more information about My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
++
+## Next steps
+
+Once you configure Kendis-Scaling Agile Platform, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/sap-successfactors-writeback-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/sap-successfactors-writeback-tutorial.md
@@ -276,7 +276,7 @@ In this section, you will configure how user data flows from SuccessFactors to A
| 3 | 8448 | emailType | This constant value is the SuccessFactors ID value associated with business email. Update this value to match your SuccessFactors environment. See the section [Retrieve constant value for emailType](#retrieve-constant-value-for-emailtype) for steps to set this value. |
| 4 | true | emailIsPrimary | Use this attribute to set business email as primary in SuccessFactors. If business email is not primary, set this flag to false. |
| 5 | userPrincipalName | [custom01 – custom15] | Using **Add New Mapping**, you can optionally write userPrincipalName or any Azure AD attribute to a custom attribute available in the SuccessFactors User object. |
- | 6 | on-prem-samAccountName | username | Using **Add New Mapping**, you can optionally map on-premises samAccountName to SuccessFactors username attribute. |
+ | 6 | On Prem SamAccountName | username | Using **Add New Mapping**, you can optionally map on-premises samAccountName to SuccessFactors username attribute. Use [Azure AD Connect sync: Directory extensions](../hybrid/how-to-connect-sync-feature-directory-extensions.md) to sync samAccountName to Azure AD. It will appear in the source drop down as *extension_yourTenantGUID_samAccountName* |
| 7 | SSO | loginMethod | If the SuccessFactors tenant is set up for [partial SSO](https://apps.support.sap.com/sap/support/knowledge/en/2320766), then using **Add New Mapping**, you can optionally set loginMethod to a constant value of "SSO" or "PWD". |
| 8 | telephoneNumber | businessPhoneNumber | Use this mapping to flow *telephoneNumber* from Azure AD to the SuccessFactors business / work phone number. |
| 9 | 10605 | businessPhoneType | This constant value is the SuccessFactors ID value associated with business phone. Update this value to match your SuccessFactors environment. See the section [Retrieve constant value for phoneType](#retrieve-constant-value-for-phonetype) for steps to set this value. |
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/servicenow-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/servicenow-provisioning-tutorial.md
@@ -1,6 +1,6 @@
Title: 'Tutorial: Configure ServiceNow for automatic user provisioning with Azure Active Directory | Microsoft Docs'
-description: Learn how to automatically provision and de-provision user accounts from Azure AD to ServiceNow.
+description: Learn how to automatically provision and deprovision user accounts from Azure AD to ServiceNow.
@@ -15,148 +15,154 @@
# Tutorial: Configure ServiceNow for automatic user provisioning
-This tutorial describes the steps you need to perform in both ServiceNow and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [ServiceNow](https://www.servicenow.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+This tutorial describes the steps that you perform in both ServiceNow and Azure Active Directory (Azure AD) to configure automatic user provisioning. When Azure AD is configured, it automatically provisions and deprovisions users and groups to [ServiceNow](https://www.servicenow.com/) by using the Azure AD provisioning service.
+For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
-## Capabilities Supported
+
+## Capabilities supported
> [!div class="checklist"]
> * Create users in ServiceNow
-> * Remove users in ServiceNow when they do not require access anymore
+> * Remove users in ServiceNow when they don't need access anymore
> * Keep user attributes synchronized between Azure AD and ServiceNow
> * Provision groups and group memberships in ServiceNow
-> * [Single sign-on](servicenow-tutorial.md) to ServiceNow (recommended)
+> * Allow [single sign-on](servicenow-tutorial.md) to ServiceNow (recommended)
## Prerequisites

The scenario outlined in this tutorial assumes that you already have the following prerequisites:

* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
-* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (e.g. Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator)
* A [ServiceNow instance](https://www.servicenow.com/) of Calgary or higher
* A [ServiceNow Express instance](https://www.servicenow.com/) of Helsinki or higher
* A user account in ServiceNow with the admin role
-## Step 1. Plan your provisioning deployment
+## Step 1: Plan your provisioning deployment
1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
3. Determine what data to [map between Azure AD and ServiceNow](../app-provisioning/customize-application-attributes.md).
-## Step 2. Configure ServiceNow to support provisioning with Azure AD
+## Step 2: Configure ServiceNow to support provisioning with Azure AD
+
+1. Identify your ServiceNow instance name. You can find the instance name in the URL that you use to access ServiceNow. In the following example, the instance name is **dev35214**.
-1. Identify your ServiceNow instance name. You can find the instance name in the URL that you use to access ServiceNow. In the example below, the instance name is dev35214.
+ ![Screenshot that shows a ServiceNow instance.](media/servicenow-provisioning-tutorial/servicenow-instance.png)
- ![ServiceNow Instance](media/servicenow-provisioning-tutorial/servicenow-instance.png)
+2. Obtain credentials for an admin in ServiceNow. Go to the user profile in ServiceNow and verify that the user has the admin role.
-2. Obtain credentials for an admin in ServiceNow. Navigate to the user profile in ServiceNow and verify that the user has the admin role.
+ ![Screenshot that shows a ServiceNow admin role.](media/servicenow-provisioning-tutorial/servicenow-admin-role.png)
- ![ServiceNow admin role](media/servicenow-provisioning-tutorial/servicenow-admin-role.png)
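Before moving on, you can optionally confirm those credentials outside Azure AD. The following is a minimal sketch that uses the ServiceNow Table API with the example instance name *dev35214*; the `SN_ADMIN` and `SN_PASSWORD` variables are placeholders for the admin account.

```bash
# Read one record from the user table to confirm the admin account can authenticate.
# A 401 response usually points to a credential or role problem.
curl --user "$SN_ADMIN:$SN_PASSWORD" \
     --header "Accept: application/json" \
     "https://dev35214.service-now.com/api/now/table/sys_user?sysparm_limit=1"
```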
+## Step 3: Add ServiceNow from the Azure AD application gallery
-## Step 3. Add ServiceNow from the Azure AD application gallery
+Add ServiceNow from the Azure AD application gallery to start managing provisioning to ServiceNow. If you previously set up ServiceNow for single sign-on (SSO), you can use the same application. However, we recommend that you create a separate app when you're testing the integration. [Learn more about adding an application from the gallery](../manage-apps/add-application-portal.md).
-Add ServiceNow from the Azure AD application gallery to start managing provisioning to ServiceNow. If you have previously setup ServiceNow for SSO you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+## Step 4: Define who will be in scope for provisioning
-## Step 4. Define who will be in scope for provisioning
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the [steps to assign users and groups to the application](../manage-apps/assign-user-or-group-access-portal.md). If you choose to scope who will be provisioned based solely on attributes of the user or group, you can [use a scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+Keep these tips in mind:
-* When assigning users and groups to ServiceNow, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* When you're assigning users and groups to ServiceNow, you must select a role other than Default Access. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the Default Access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* Start small. Test with a small set of users and groups before rolling out to everyone. When the scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When the scope is set to all users and groups, you can specify an [attribute-based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
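If you script assignments rather than using the portal, one option is the Microsoft Graph `appRoleAssignedTo` endpoint. The sketch below is illustrative only; the service principal ID, user ID, and app role ID are placeholders, and the role must be one other than Default Access.

```bash
# Assign a user to the ServiceNow enterprise application (all GUIDs are placeholders).
az rest --method POST \
  --uri "https://graph.microsoft.com/v1.0/servicePrincipals/<sp-object-id>/appRoleAssignedTo" \
  --body '{"principalId": "<user-object-id>", "resourceId": "<sp-object-id>", "appRoleId": "<app-role-id>"}'
```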
-## Step 5. Configure automatic user provisioning to ServiceNow
+## Step 5: Configure automatic user provisioning to ServiceNow
-This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in TestApp based on user and/or group assignments in Azure AD.
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and groups in TestApp. You can base the configuration on user and group assignments in Azure AD.
-### To configure automatic user provisioning for ServiceNow in Azure AD:
+To configure automatic user provisioning for ServiceNow in Azure AD:
-1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise applications** > **All applications**.
- ![Enterprise applications blade](common/enterprise-applications.png)
+ ![Screenshot that shows the Enterprise applications pane.](common/enterprise-applications.png)
-2. In the applications list, select **ServiceNow**.
+2. In the list of applications, select **ServiceNow**.
- ![The ServiceNow link in the Applications list](common/all-applications.png)
+ ![Screenshot that shows a list of applications.](common/all-applications.png)
3. Select the **Provisioning** tab.

    ![Screenshot of the Manage options with the Provisioning option called out.](common/provisioning.png)
-4. Set the **Provisioning Mode** to **Automatic**.
+4. Set **Provisioning Mode** to **Automatic**.
- ![Screenshot of the Provisioning Mode dropdown list with the Automatic option called out.](common/provisioning-automatic.png)
+ ![Screenshot of the Provisioning Mode drop-down list with the Automatic option called out.](common/provisioning-automatic.png)
-5. Under the **Admin Credentials** section, input your ServiceNow admin credentials and username. Click **Test Connection** to ensure Azure AD can connect to ServiceNow. If the connection fails, ensure your ServiceNow account has Admin permissions and try again.
+5. In the **Admin Credentials** section, enter your ServiceNow admin credentials and username. Select **Test Connection** to ensure that Azure AD can connect to ServiceNow. If the connection fails, ensure that your ServiceNow account has admin permissions and try again.
- ![Screenshot shows the Service Provisioning page, where you can enter Admin Credentials.](./media/servicenow-provisioning-tutorial/servicenow-provisioning.png)
+ ![Screenshot that shows the Service Provisioning page, where you can enter admin credentials.](./media/servicenow-provisioning-tutorial/servicenow-provisioning.png)
-6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+6. In the **Notification Email** box, enter the email address of a person or group that should receive the provisioning error notifications. Then select the **Send an email notification when a failure occurs** check box.
- ![Notification Email](common/provisioning-notification-email.png)
+ ![Screenshot that shows boxes for notification email.](common/provisioning-notification-email.png)
7. Select **Save**.
-8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to ServiceNow**.
+8. In the **Mappings** section, select **Synchronize Azure Active Directory Users to ServiceNow**.
-9. Review the user attributes that are synchronized from Azure AD to ServiceNow in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in ServiceNow for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the ServiceNow API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+9. Review the user attributes that are synchronized from Azure AD to ServiceNow in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in ServiceNow for update operations.
+
+ If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the ServiceNow API supports filtering users based on that attribute.
+
+ Select the **Save** button to commit any changes.
-10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to ServiceNow**.
+10. In the **Mappings** section, select **Synchronize Azure Active Directory Groups to ServiceNow**.
11. Review the group attributes that are synchronized from Azure AD to ServiceNow in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in ServiceNow for update operations. Select the **Save** button to commit any changes.
-12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+12. To configure scoping filters, see the instructions in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-13. To enable the Azure AD provisioning service for ServiceNow, change the **Provisioning Status** to **On** in the **Settings** section.
+13. To enable the Azure AD provisioning service for ServiceNow, change **Provisioning Status** to **On** in the **Settings** section.
- ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+ ![Screenshot that shows Provisioning Status switched on.](common/provisioning-toggle-on.png)
-14. Define the users and/or groups that you would like to provision to ServiceNow by choosing the desired values in **Scope** in the **Settings** section.
+14. Define the users and groups that you want to provision to ServiceNow by choosing the desired values in **Scope** in the **Settings** section.
- ![Provisioning Scope](common/provisioning-scope.png)
+ ![Screenshot that shows choices for provisioning scope.](common/provisioning-scope.png)
-15. When you are ready to provision, click **Save**.
+15. When you're ready to provision, select **Save**.
- ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+ ![Screenshot of the button for saving a provisioning configuration.](common/provisioning-configuration-save.png)
-This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles. Subsequent cycles occur about every 40 minutes, as long as the Azure AD provisioning service is running.
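If you prefer to inspect or nudge the cycle from the command line, the Microsoft Graph synchronization API exposes the underlying provisioning job. This is a hedged sketch rather than a documented step; the object ID and job ID are placeholders, and depending on your tenant the endpoints may only be available under `beta`.

```bash
# List the provisioning jobs (with their IDs and status) on the ServiceNow service principal.
az rest --method GET \
  --uri "https://graph.microsoft.com/v1.0/servicePrincipals/<sp-object-id>/synchronization/jobs"

# Start a cycle outside the normal ~40-minute schedule.
az rest --method POST \
  --uri "https://graph.microsoft.com/v1.0/servicePrincipals/<sp-object-id>/synchronization/jobs/<job-id>/start"
```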
-## Step 6. Monitor your deployment
-Once you've configured provisioning, use the following resources to monitor your deployment:
+## Step 6: Monitor your deployment
+After you've configured provisioning, use the following resources to monitor your deployment:
-1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+- Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
+- Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+- If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. [Learn more about quarantine states](../app-provisioning/application-provisioning-quarantine-status.md).
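The same logs can be pulled programmatically if you want to automate monitoring. A sketch, assuming your account holds a role that can read audit data; this endpoint has moved between `beta` and `v1.0` over time, so adjust the version if you get a 404.

```bash
# Fetch the 20 most recent provisioning log entries.
az rest --method GET \
  --uri 'https://graph.microsoft.com/v1.0/auditLogs/provisioning?$top=20'
```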
-## Troubleshooting Tips
-* **InvalidLookupReference:** When provisioning certain attributes such as Department and Location in ServiceNow, the values must already exist in a reference table in ServiceNow. For example, you may have two locations (Seattle, Los Angeles) and three departments (Sales, Finance, Marketing) in the **insert table name** table in ServiceNow. If you attempt to provision a user where his department is "Sales" and location is "Seattle" he will be provisioned successfully. If you attempt to provision a user with department "Sales" and location "LA" the user won't be provisioned. The location LA must either be added to the reference table in ServiceNow or the user attribute in Azure AD must be updated to match the format in ServiceNow.
-* **EntryJoiningPropertyValueIsMissing:** Review your [attribute mappings](../app-provisioning/customize-application-attributes.md) to identify the matching attribute. This value must be present on the user or group you're attempting to provision.
-* Review the [ServiceNow SOAP API](https://docs.servicenow.com/bundle/newyork-application-development/page/integrate/web-services-apis/reference/r_DirectWebServiceAPIFunctions.html) to understand any requirements or limitations (for example, format to specify country code for a user)
-* Provisioning requests are sent by default to https://{your-instance-name}.service-now.com/{table-name}. If you require a custom tenant URL, you can provide the entire URL in the instance name field.
-* **ServiceNowInstanceInvalid**
+## Troubleshooting tips
+* When you're provisioning certain attributes (such as **Department** and **Location**) in ServiceNow, the values must already exist in a reference table in ServiceNow. If they don't, you'll get an **InvalidLookupReference** error.
+
+ For example, you might have two locations (Seattle, Los Angeles) and three departments (Sales, Finance, Marketing) in a certain table in ServiceNow. If you try to provision a user whose department is "Sales" and whose location is "Seattle," that user will be provisioned successfully. If you try to provision a user whose department is "Sales" and whose location is "LA," the user won't be provisioned. The location "LA" must be added to the reference table in ServiceNow, or the user attribute in Azure AD must be updated to match the format in ServiceNow.
+* If you get an **EntryJoiningPropertyValueIsMissing** error, review your [attribute mappings](../app-provisioning/customize-application-attributes.md) to identify the matching attribute. This value must be present on the user or group you're trying to provision.
+* To understand any requirements or limitations (for example, the format to specify a country code for a user), review the [ServiceNow SOAP API](https://docs.servicenow.com/bundle/newyork-application-development/page/integrate/web-services-apis/reference/r_DirectWebServiceAPIFunctions.html).
+* Provisioning requests are sent by default to https://{your-instance-name}.service-now.com/{table-name}. If you need a custom tenant URL, you can provide the entire URL as the instance name.
+* The **ServiceNowInstanceInvalid** error indicates a problem communicating with the ServiceNow instance. Here's the text of the error:
`Details: Your ServiceNow instance name appears to be invalid. Please provide a current ServiceNow administrative user name and password along with the name of a valid ServiceNow instance.`
- This error indicates an issue communicating with the ServiceNow instance.
-
- If you are having test connection issues try making the following settings as **disabled** in ServiceNow:
+ If you're having test connection problems, try selecting **No** for the following settings in ServiceNow:
- 1. Select **System Security** > **High security settings** > **Require basic authentication for incoming SCHEMA requests**.
- 2. Select **System Properties** > **Web Services** > **Require basic authorization for incoming SOAP requests**.
-
- ![Authorizing SOAP request](media/servicenow-provisioning-tutorial/servicenow-webservice.png)
+ - **System Security** > **High security settings** > **Require basic authentication for incoming SCHEMA requests**
+ - **System Properties** > **Web Services** > **Require basic authorization for incoming SOAP requests**
- If it resolves your issues then contact ServiceNow support and ask them to turn on SOAP debugging to help troubleshoot.
+ ![Screenshot that shows the option for authorizing SOAP requests.](media/servicenow-provisioning-tutorial/servicenow-webservice.png)
-* **IP Ranges**
+ If you still can't resolve your problem, contact ServiceNow support and ask them to turn on SOAP debugging to help troubleshoot.
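Before contacting support, one way to reproduce what the connection test does is to request a WSDL from the instance with basic authentication. This sketch assumes the direct web services are reachable; the instance name and the credential variables are placeholders.

```bash
# If this returns 401 while the settings above are enabled, the basic-auth
# requirements are the likely cause of the failed connection test.
curl --user "$SN_ADMIN:$SN_PASSWORD" \
     "https://your-instance-name.service-now.com/sys_user.do?WSDL"
```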
- The Azure AD provisioning service currently operates under a particular IP ranges.So if required you can restrict other IP ranges and add these particular IP ranges to the allowlist of your application to allow traffic flow from Azure AD provisioning service to your application .Refer the documentation at [IP Ranges](../app-provisioning/use-scim-to-provision-users-and-groups.md#ip-ranges).
+* The Azure AD provisioning service currently operates under particular [IP ranges](../app-provisioning/use-scim-to-provision-users-and-groups.md#ip-ranges). If necessary, you can restrict other IP ranges and add these particular IP ranges to the allow list of your application. That technique will allow traffic flow from the Azure AD provisioning service to your application.
## Additional resources
-* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
-* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Managing user account provisioning for enterprise apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What are application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/snowflake-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/snowflake-provisioning-tutorial.md
@@ -1,6 +1,6 @@
Title: 'Tutorial: Configure Snowflake for automatic user provisioning with Azure Active Directory | Microsoft Docs'
-description: Learn how to configure Azure Active Directory to automatically provision and de-provision user accounts to Snowflake.
+description: Learn how to configure Azure Active Directory to automatically provision and deprovision user accounts to Snowflake.
writer: zchia
@@ -15,98 +15,101 @@
# Tutorial: Configure Snowflake for automatic user provisioning
-The objective of this tutorial is to demonstrate the steps to be performed in Snowflake and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and de-provision users and/or groups to [Snowflake](https://www.Snowflake.com/pricing/). For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
-
+This tutorial demonstrates the steps that you perform in Snowflake and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and deprovision users and groups to [Snowflake](https://www.Snowflake.com/pricing/). For important details on what this service does, how it works, and frequently asked questions, see [What is automated SaaS app user provisioning in Azure AD?](../app-provisioning/user-provisioning.md).
> [!NOTE]
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> This connector is currently in public preview. For information about terms of use, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Capabilities supported

> [!div class="checklist"]
> * Create users in Snowflake
-> * Remove users in Snowflake when they do not require access anymore
+> * Remove users in Snowflake when they don't require access anymore
> * Keep user attributes synchronized between Azure AD and Snowflake
> * Provision groups and group memberships in Snowflake
-> * [Single sign-on](./snowflake-tutorial.md) to Snowflake (recommended)
+> * Allow [single sign-on](./snowflake-tutorial.md) to Snowflake (recommended)
## Prerequisites

The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
-* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (e.g. Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
-* [A Snowflake tenant](https://www.Snowflake.com/pricing/).
-* A user account in Snowflake with Admin permissions.
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator)
+* [A Snowflake tenant](https://www.Snowflake.com/pricing/)
+* A user account in Snowflake with admin permissions
-## Step 1. Plan your provisioning deployment
+## Step 1: Plan your provisioning deployment
1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
3. Determine what data to [map between Azure AD and Snowflake](../app-provisioning/customize-application-attributes.md).
-## Step 2. Configure Snowflake to support provisioning with Azure AD
+## Step 2: Configure Snowflake to support provisioning with Azure AD
+
+Before you configure Snowflake for automatic user provisioning with Azure AD, you need to enable System for Cross-domain Identity Management (SCIM) provisioning on Snowflake.
-Before configuring Snowflake for automatic user provisioning with Azure AD, you will need to enable SCIM provisioning on Snowflake.
+1. Sign in to your Snowflake admin console. Enter the following query in the highlighted worksheet, and then select **Run**.
-1. Sign in to your Snowflake Admin Console. Enter the query shown below in the worksheet highlighted and click **Run**.
+ ![Screenshot of the Snowflake admin console with query and Run button.](media/Snowflake-provisioning-tutorial/image00.png)
- ![Snowflake Admin Console](media/Snowflake-provisioning-tutorial/image00.png)
+2. A SCIM access token is generated for your Snowflake tenant. To retrieve it, select the link highlighted in the following screenshot.
-2. A SCIM Access Token will be generated for your Snowflake tenant. To retrieve it, click on the link highlighted below.
+ ![Screenshot of a worksheet in the Snowflake U I with the S C I M access token called out.](media/Snowflake-provisioning-tutorial/image01.png)
- ![Screenshot of a worksheet in the Snowflake U I with the S C I M Access token called out.](media/Snowflake-provisioning-tutorial/image01.png)
+3. Copy the generated token value and select **Done**. This value is entered in the **Secret Token** box on the **Provisioning** tab of your Snowflake application in the Azure portal.
-3. Copy the generated token value and click **Done**. This value will be entered in the **Secret Token** field in the Provisioning tab of your Snowflake application in the Azure portal.
+ ![Screenshot of the Details section, showing the token copied into the text field and the Done option called out.](media/Snowflake-provisioning-tutorial/image02.png)
- ![Screenshot of the Details section showing the token copied into the text field and the Done option called out.](media/Snowflake-provisioning-tutorial/image02.png)
+## Step 3: Add Snowflake from the Azure AD application gallery
-## Step 3. Add Snowflake from the Azure AD application gallery
+Add Snowflake from the Azure AD application gallery to start managing provisioning to Snowflake. If you previously set up Snowflake for single sign-on (SSO), you can use the same application. However, we recommend that you create a separate app when you're initially testing the integration. [Learn more about adding an application from the gallery](../manage-apps/add-application-portal.md).
-Add Snowflake from the Azure AD application gallery to start managing provisioning to Snowflake. If you have previously setup Snowflake for SSO you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+## Step 4: Define who will be in scope for provisioning
-## Step 4. Define who will be in scope for provisioning
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the [steps to assign users and groups to the application](../manage-apps/assign-user-or-group-access-portal.md). If you choose to scope who will be provisioned based solely on attributes of the user or group, you can [use a scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+Keep these tips in mind:
-* When assigning users and groups to Snowflake, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* When you're assigning users and groups to Snowflake, you must select a role other than Default Access. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the Default Access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* Start small. Test with a small set of users and groups before rolling out to everyone. When the scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When the scope is set to all users and groups, you can specify an [attribute-based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-## Step 5. Configure automatic user provisioning to Snowflake
+## Step 5: Configure automatic user provisioning to Snowflake
-This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Snowflake based on user and/or group assignments in Azure AD.
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and groups in Snowflake. You can base the configuration on user and group assignments in Azure AD.
-### To configure automatic user provisioning for Snowflake in Azure AD:
+To configure automatic user provisioning for Snowflake in Azure AD:
-1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise applications** > **All applications**.
- ![Enterprise applications blade](common/enterprise-applications.png)
+ ![Screenshot that shows the Enterprise applications pane.](common/enterprise-applications.png)
-2. In the applications list, select **Snowflake**.
+2. In the list of applications, select **Snowflake**.
- ![The Snowflake link in the Applications list](common/all-applications.png)
+ ![Screenshot that shows a list of applications.](common/all-applications.png)
3. Select the **Provisioning** tab.

    ![Screenshot of the Manage options with the Provisioning option called out.](common/provisioning.png)
-4. Set the **Provisioning Mode** to **Automatic**.
+4. Set **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of the Provisioning Mode drop-down list with the Automatic option called out.](common/provisioning-automatic.png)
- ![Screenshot of the Provisioning Mode dropdown list with the Automatic option called out.](common/provisioning-automatic.png)
+5. In the **Admin Credentials** section, enter the SCIM 2.0 base URL and authentication token that you retrieved earlier in the **Tenant URL** and **Secret Token** boxes, respectively.
-5. Under the Admin Credentials section, input the **SCIM 2.0 base URL and Authentication Token** values retrieved earlier in **Tenant URL** and **Secret Token** fields respectively. Click **Test Connection** to ensure Azure AD can connect to Snowflake. If the connection fails, ensure your Snowflake account has Admin permissions and try again.
+ Select **Test Connection** to ensure that Azure AD can connect to Snowflake. If the connection fails, ensure that your Snowflake account has admin permissions and try again.
- ![Tenant URL + Token](common/provisioning-testconnection-tenanturltoken.png)
+ ![Screenshot that shows boxes for tenant U R L and secret token, along with the Test Connection button.](common/provisioning-testconnection-tenanturltoken.png)
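If the connection test fails, you can also exercise the token and base URL directly. A minimal sketch, assuming the usual `https://<account>.snowflakecomputing.com/scim/v2` base URL shape and that `SCIM_TOKEN` holds the secret token you generated earlier:

```bash
# List one user through the Snowflake SCIM endpoint to validate the token and URL.
curl --header "Authorization: Bearer $SCIM_TOKEN" \
     "https://<account>.snowflakecomputing.com/scim/v2/Users?count=1"
```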
-7. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and check the checkbox - **Send an email notification when a failure occurs**.
+6. In the **Notification Email** box, enter the email address of a person or group who should receive the provisioning error notifications. Then select the **Send an email notification when a failure occurs** check box.
- ![Notification Email](common/provisioning-notification-email.png)
+ ![Screenshot that shows boxes for notification email.](common/provisioning-notification-email.png)
-8. Click **Save**.
+7. Select **Save**.
-9. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Snowflake**.
+8. In the **Mappings** section, select **Synchronize Azure Active Directory Users to Snowflake**.
-10. Review the user attributes that are synchronized from Azure AD to Snowflake in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Snowflake for update operations. Select the **Save** button to commit any changes.
+9. Review the user attributes that are synchronized from Azure AD to Snowflake in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Snowflake for update operations. Select the **Save** button to commit any changes.
|Attribute|Type|
|---|---|
@@ -119,56 +122,56 @@ This section guides you through the steps to configure the Azure AD provisioning
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:defaultRole|String|
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:defaultWarehouse|String|
-11. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Snowflake**.
+10. In the **Mappings** section, select **Synchronize Azure Active Directory Groups to Snowflake**.
-12. Review the group attributes that are synchronized from Azure AD to Snowflake in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Snowflake for update operations. Select the **Save** button to commit any changes.
+11. Review the group attributes that are synchronized from Azure AD to Snowflake in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Snowflake for update operations. Select the **Save** button to commit any changes.
- |Attribute|Type|
- |||
- |displayName|String|
- |members|Reference|
+ |Attribute|Type|
+ |||
+ |displayName|String|
+ |members|Reference|
-13. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+12. To configure scoping filters, see the instructions in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-14. To enable the Azure AD provisioning service for Snowflake, change the **Provisioning Status** to **On** in the **Settings** section.
+13. To enable the Azure AD provisioning service for Snowflake, change **Provisioning Status** to **On** in the **Settings** section.
- ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+ ![Screenshot that shows Provisioning Status switched on.](common/provisioning-toggle-on.png)
-15. Define the users and/or groups that you would like to provision to Snowflake by choosing the desired values in **Scope** in the **Settings** section. If this option is not available, please configure the required fields under Admin Credentials, Click **Save** and refresh the page.
+14. Define the users and groups that you want to provision to Snowflake by choosing the desired values in **Scope** in the **Settings** section.
- ![Provisioning Scope](common/provisioning-scope.png)
+ If this option is not available, configure the required fields under **Admin Credentials**, select **Save**, and refresh the page.
-16. When you are ready to provision, click **Save**.
+ ![Screenshot that shows choices for provisioning scope.](common/provisioning-scope.png)
- ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+15. When you're ready to provision, select **Save**.
- This operation starts the initial synchronization of all users and/or groups defined in **Scope** in the **Settings** section. The initial sync takes longer to perform than subsequent syncs, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+ ![Screenshot of the button for saving a provisioning configuration.](common/provisioning-configuration-save.png)
-## Step 6. Monitor your deployment
-Once you've configured provisioning, use the following resources to monitor your deployment:
+This operation starts the initial synchronization of all users and groups defined in **Scope** in the **Settings** section. The initial sync takes longer to perform than subsequent syncs. Subsequent syncs occur about every 40 minutes, as long as the Azure AD provisioning service is running.
-1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+## Step 6: Monitor your deployment
+After you've configured provisioning, use the following resources to monitor your deployment:
-## Connector limitations
+- Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
+- Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+- If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. [Learn more about quarantine states](../app-provisioning/application-provisioning-quarantine-status.md).
-* Snowflake generated SCIM tokens expire in 6 months. Be aware that these need to be refreshed before they expire to allow the provisioning syncs to continue working.
+## Connector limitations
-## Troubleshooting Tips
+Snowflake-generated SCIM tokens expire in 6 months. Be aware that you need to refresh these tokens before they expire, to allow the provisioning syncs to continue working.
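Rotation can also be scripted. After you generate a new token in Snowflake, a hedged option is to push it into the provisioning configuration through the Microsoft Graph synchronization secrets endpoint instead of pasting it in the portal; the service principal ID is a placeholder, and some tenants may need the `beta` version of the endpoint.

```bash
# Store the refreshed Snowflake SCIM token as the provisioning secret token.
az rest --method PUT \
  --uri "https://graph.microsoft.com/v1.0/servicePrincipals/<sp-object-id>/synchronization/secrets" \
  --body '{"value": [{"key": "SecretToken", "value": "<new-scim-token>"}]}'
```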
-* **IP Ranges**
+## Troubleshooting tips
- The Azure AD provisioning service currently operates under a particular IP ranges. So if required you can restrict other IP ranges and add these particular IP ranges to the allowlist of your application to allow traffic flow from Azure AD provisioning service to your application .Refer the documentation at [IP Ranges](../app-provisioning/use-scim-to-provision-users-and-groups.md#ip-ranges).
+The Azure AD provisioning service currently operates under particular [IP ranges](../app-provisioning/use-scim-to-provision-users-and-groups.md#ip-ranges). If necessary, you can restrict other IP ranges and add these particular IP ranges to the allow list of your application. That technique will allow traffic flow from the Azure AD provisioning service to your application.
-## Change Log
+## Change log
-* 07/21/2020 - Enabled soft-delete for all users (via the active attribute).
+* 07/21/2020: Enabled soft-delete for all users (via the active attribute).
## Additional resources
-* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md).
-* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Managing user account provisioning for enterprise apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What are application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md).
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
aks https://docs.microsoft.com/en-us/azure/aks/aks-migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/aks-migration.md
@@ -16,7 +16,7 @@ This document can be used to help support the following scenarios:
* Migrating an AKS Cluster backed by [Availability Sets](../virtual-machines/windows/tutorial-availability-sets.md) to [Virtual Machine Scale Sets](../virtual-machine-scale-sets/overview.md)
* Migrating an AKS cluster to use a [Standard SKU load balancer](./load-balancer-standard.md)
* Migrating from [Azure Container Service (ACS) - retiring January 31, 2020](https://azure.microsoft.com/updates/azure-container-service-will-retire-on-january-31-2020/) to AKS
-* Migrating from [AKS engine](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview?view=azs-1908) to AKS
+* Migrating from [AKS engine](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview) to AKS
* Migrating from non-Azure based Kubernetes clusters to AKS
* Moving existing resources to a different region
aks https://docs.microsoft.com/en-us/azure/aks/howto-deploy-java-liberty-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/howto-deploy-java-liberty-app.md
@@ -26,7 +26,7 @@ This guide demonstrates how to run your Java, Java EE, [Jakarta EE](https://jaka
## Create a resource group
-An Azure resource group is a logical group in which Azure resources are deployed and managed. Create a resource group, *java-liberty-project* using the [az group create](/cli/azure/group?view=azure-cli-latest&preserve-view=true#az_group_create) command in the *eastus* location. It will be used for creating the Azure Container Registry (ACR) instance and the AKS cluster later.
+An Azure resource group is a logical group in which Azure resources are deployed and managed. Create a resource group, *java-liberty-project* using the [az group create](/cli/azure/group#az_group_create) command in the *eastus* location. It will be used for creating the Azure Container Registry (ACR) instance and the AKS cluster later.
```azurecli-interactive
az group create --name java-liberty-project --location eastus
```
@@ -34,7 +34,7 @@ az group create --name java-liberty-project --location eastus
## Create an ACR instance
-Use the [az acr create](/cli/azure/acr?view=azure-cli-latest&preserve-view=true#az_acr_create) command to create the ACR instance. The following example creates an ACR instance named *youruniqueacrname*. Make sure *youruniqueacrname* is unique within Azure.
+Use the [az acr create](/cli/azure/acr#az_acr_create) command to create the ACR instance. The following example creates an ACR instance named *youruniqueacrname*. Make sure *youruniqueacrname* is unique within Azure.
```azurecli-interactive
az acr create --resource-group java-liberty-project --name youruniqueacrname --sku Basic --admin-enabled
```
@@ -65,7 +65,7 @@ You should see `Login Succeeded` at the end of command output if you have logged
## Create an AKS cluster
-Use the [az aks create](/cli/azure/aks?view=azure-cli-latest&preserve-view=true#az_aks_create) command to create an AKS cluster. The following example creates a cluster named *myAKSCluster* with one node. This will take several minutes to complete.
+Use the [az aks create](/cli/azure/aks#az_aks_create) command to create an AKS cluster. The following example creates a cluster named *myAKSCluster* with one node. This will take several minutes to complete.
```azurecli-interactive
az aks create --resource-group java-liberty-project --name myAKSCluster --node-count 1 --generate-ssh-keys --enable-managed-identity
```
@@ -82,13 +82,13 @@ After a few minutes, the command completes and returns JSON-formatted informatio
### Connect to the AKS cluster
-To manage a Kubernetes cluster, you use [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/), the Kubernetes command-line client. If you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, use the [az aks install-cli](/cli/azure/aks?view=azure-cli-latest&preserve-view=true#az_aks_install_cli) command:
+To manage a Kubernetes cluster, you use [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/), the Kubernetes command-line client. If you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, use the [az aks install-cli](/cli/azure/aks#az_aks_install_cli) command:
```azurecli-interactive
az aks install-cli
```
-To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials](/cli/azure/aks?view=azure-cli-latest&preserve-view=true#az_aks_get_credentials) command. This command downloads credentials and configures the Kubernetes CLI to use them.
+To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials](/cli/azure/aks#az_aks_get_credentials) command. This command downloads credentials and configures the Kubernetes CLI to use them.
```azurecli-interactive
az aks get-credentials --resource-group java-liberty-project --name myAKSCluster --overwrite-existing
```
@@ -208,14 +208,14 @@ Wait until the *EXTERNAL-IP* address changes from *pending* to an actual public
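One way to watch for the address, assuming `<your-service-name>` is the name of the LoadBalancer service created by your deployment file:

```bash
# Watch the service until EXTERNAL-IP changes from <pending> to a public address.
kubectl get service <your-service-name> --watch
```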
Open a web browser to the external IP address and port of your service (`52.152.189.57:9080` for the above example) to see the application home page. You should see the pod name of your application replicas displayed at the top-left of the page. Wait a few minutes and refresh the page; you'll probably see a different pod name displayed because of the load balancing provided by the AKS cluster.

>[!NOTE]
> - Currently the application is not using HTTPS. It is recommended to [ENABLE TLS with your own certificates](ingress-own-tls.md).

## Clean up the resources
-To avoid Azure charges, you should clean up unneeded resources. When the cluster is no longer needed, use the [az group delete](/cli/azure/group?view=azure-cli-latest&preserve-view=true#az_group_delete) command to remove the resource group, container service, container registry, and all related resources.
+To avoid Azure charges, you should clean up unneeded resources. When the cluster is no longer needed, use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group, container service, container registry, and all related resources.
```azurecli-interactive
az group delete --name java-liberty-project --yes --no-wait
```
aks https://docs.microsoft.com/en-us/azure/aks/private-clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/private-clusters.md
@@ -122,7 +122,7 @@ As mentioned, virtual network peering is one way to access your private cluster.
## Limitations

* IP authorized ranges can't be applied to the private API server endpoint; they apply only to the public API server.
* [Azure Private Link service limitations][private-link-service] apply to private clusters.
-* No support for Azure DevOps Microsoft-hosted Agents with private clusters. Consider to use [Self-hosted Agents](/azure/devops/pipelines/agents/agents?preserve-view=true&tabs=browser&view=azure-devops).
+* No support for Azure DevOps Microsoft-hosted Agents with private clusters. Consider using [Self-hosted Agents](/azure/devops/pipelines/agents/agents?tabs=browser).
* For customers that need to enable Azure Container Registry to work with private AKS, the Container Registry virtual network must be peered with the agent cluster virtual network.
* No support for converting existing AKS clusters into private clusters
* Deleting or modifying the private endpoint in the customer subnet will cause the cluster to stop functioning.
analysis-services https://docs.microsoft.com/en-us/azure/analysis-services/analysis-services-datasource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-datasource.md
@@ -4,7 +4,7 @@ description: Describes data sources and connectors supported for tabular 1200 an
Previously updated : 02/02/2021 Last updated : 02/03/2021
@@ -31,7 +31,7 @@ Data sources and connectors shown in Get Data or Table Import Wizard in Visual S
**Notes:**

<a name="tab1400a">1</a> - Tabular 1400 and higher models only.
-<a name="azprovider">2</a> - When specified as a *provider* data source in tabular 1200 and higher models, both in-memory and DirectQuery models require Microsoft OLE DB Driver for SQL Server MSOLEDBSQL (recommended), SQL Server Native Client 11.0, or .NET Framework Data Provider for SQL Server.
+<a name="azprovider">2</a> - When specified as a *provider* data source in tabular 1200 and higher models, both in-memory and DirectQuery models require Microsoft OLE DB Driver for SQL Server MSOLEDBSQL (recommended) or .NET Framework Data Provider for SQL Server.
<a name="azsqlmanaged">3</a> - Azure SQL Managed Instance is supported. Because SQL Managed Instance runs within Azure VNet with a private IP address, public endpoint must be enabled on the instance. If not enabled, an [On-premises data gateway](analysis-services-gateway.md) is required.
<a name="databricks">4</a> - Azure Databricks using the Spark connector is currently not supported.
<a name="gen2">5</a> - ADLS Gen2 connector is currently not supported; however, the Azure Blob Storage connector can be used with an ADLS Gen2 data source.
@@ -169,4 +169,4 @@ To enable Oracle managed provider:
## Next steps

* [On-premises gateway](analysis-services-gateway.md)
-* [Manage your server](analysis-services-manage.md)
+* [Manage your server](analysis-services-manage.md)
api-management https://docs.microsoft.com/en-us/azure/api-management/how-to-configure-local-metrics-logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/how-to-configure-local-metrics-logs.md
@@ -1,6 +1,6 @@
Title: Configure local metrics and logs for Azure API Management self-hosted gateway | Microsoft Docs
-description: Learn how to configure local metrics and logs for Azure API Management self-hosted gateway
+description: Learn how to configure local metrics and logs for Azure API Management self-hosted gateway on a Kubernetes cluster
documentationcenter: ''
@@ -11,14 +11,14 @@
na Previously updated : 04/30/2020 Last updated : 02/01/2021 # Configure local metrics and logs for Azure API Management self-hosted gateway
-This article provides details for configuring local metrics and logs for the [self-hosted gateway](./self-hosted-gateway-overview.md). For configuring cloud metrics and logs, see [this article](how-to-configure-cloud-metrics-logs.md).
+This article provides details for configuring local metrics and logs for the [self-hosted gateway](./self-hosted-gateway-overview.md) deployed on a Kubernetes cluster. For configuring cloud metrics and logs, see [this article](how-to-configure-cloud-metrics-logs.md).
## Metrics

The self-hosted gateway supports [StatsD](https://github.com/statsd/statsd), which has become a unifying protocol for metrics collection and aggregation. This section walks through the steps for deploying StatsD to Kubernetes, configuring the gateway to emit metrics via StatsD, and using [Prometheus](https://prometheus.io/) to monitor the metrics.
@@ -62,7 +62,7 @@ spec:
spec:
  containers:
  - name: sputnik-metrics-statsd
- image: prom/statsd-exporter
+ image: mcr.microsoft.com/aks/hcp/prom/statsd-exporter
    ports:
    - name: tcp
      containerPort: 9102
@@ -77,7 +77,7 @@ spec:
    - mountPath: /tmp
      name: sputnik-metrics-config-files
  - name: sputnik-metrics-prometheus
- image: prom/prometheus
+ image: mcr.microsoft.com/oss/prometheus/prometheus
    ports:
    - name: tcp
      containerPort: 9090
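Once the pods are running, a quick way to inspect the scraped gateway metrics is to port-forward the Prometheus container to your machine. A sketch that assumes the deployment is named `sputnik-metrics`, matching the container names above:

```bash
# Forward the Prometheus UI locally, then browse to http://localhost:9090.
kubectl port-forward deployment/sputnik-metrics 9090:9090
```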
app-service https://docs.microsoft.com/en-us/azure/app-service/web-sites-monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/web-sites-monitor.md
@@ -81,7 +81,7 @@ For an app, the available metrics are:
| **Current Assemblies** | The current number of Assemblies loaded across all AppDomains in this application. |
| **Data In** | The amount of incoming bandwidth consumed by the app, in MiB. |
| **Data Out** | The amount of outgoing bandwidth consumed by the app, in MiB. |
-| **File System Usage** | Percentage of filesystem quota consumed by the app. |
+| **File System Usage** | The amount of usage in bytes by storage share. |
| **Gen 0 Garbage Collections** | The number of times the generation 0 objects are garbage collected since the start of the app process. Higher generation GCs include all lower generation GCs.|
| **Gen 1 Garbage Collections** | The number of times the generation 1 objects are garbage collected since the start of the app process. Higher generation GCs include all lower generation GCs.|
| **Gen 2 Garbage Collections** | The number of times the generation 2 objects are garbage collected since the start of the app process.|
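These metrics can also be queried from the Azure CLI. For example, here's a sketch pulling file system usage for an app; the resource ID is a placeholder, and `FileSystemUsage` is assumed to be the metric's internal name:

```azurecli-interactive
# Query the app's file system usage, reported in bytes per storage share.
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Web/sites/<app-name>" \
  --metric "FileSystemUsage"
```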
attestation https://docs.microsoft.com/en-us/azure/attestation/virtualization-based-security-protocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/virtualization-based-security-protocol.md
@@ -52,7 +52,7 @@ Azure Attestation -> Client
**challenge** (BASE64URL(OCTETS)): Random value issued by the service.
-**service_context** (BASE64URL(OCTETS)): Opaque, encrypted context created by the service, which includes, among others, the challenge, and an expiration time for that challenge.
+**service_context** (BASE64URL(OCTETS)): Opaque context created by the service.
### Request message
@@ -233,7 +233,7 @@ TPM + VBS enclave sample:
- **value_type (String)**: Data type of the claim's value
-**service_context** (BASE64URL(OCTETS)): Opaque, encrypted context created by the service which includes, among others, the challenge and an expiration time for that challenge.
+**service_context** (BASE64URL(OCTETS)): Opaque context created by the service.
### Report message
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/servers/agent-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/agent-overview.md
@@ -1,7 +1,7 @@
Title: Overview of the Connected Machine Windows agent description: This article provides a detailed overview of the Azure Arc enabled servers agent available, which supports monitoring virtual machines hosted in hybrid environments. Previously updated : 01/08/2021 Last updated : 02/03/2021
@@ -111,9 +111,9 @@ Preview agents (version 0.11 and lower) also require access to the following URL
|`agentserviceapi.azure-automation.net`|Guest Configuration|
|`*-agentservice-prod-1.azure-automation.net`|Guest Configuration|
-For a list of IP addresses for each service tag/region, see the JSON file - [Azure IP Ranges and Service Tags – Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519). Microsoft publishes weekly updates containing each Azure Service and the IP ranges it uses. For more information, review [Service tags](../../virtual-network/network-security-groups-overview.md#service-tags).
+For a list of IP addresses for each service tag/region, see the JSON file - [Azure IP Ranges and Service Tags – Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519). Microsoft publishes weekly updates containing each Azure service and the IP ranges it uses. The information in the JSON file is the current point-in-time list of the IP ranges that correspond to each service tag. The IP addresses are subject to change. If IP address ranges are required for your firewall configuration, use the **AzureCloud** service tag to allow access to all Azure services. Do not disable security monitoring or inspection of these URLs; allow them as you would other Internet traffic.
-The URLs in the previous table are required in addition to the Service Tag IP address range information because most services do not currently have a Service Tag registration. As such, the IP addresses are subject to change. If IP address ranges are required for your firewall configuration, then the **AzureCloud** Service Tag should be used to allow access to all Azure services. Do not disable security monitoring or inspection of these URLs, allow them as you would other Internet traffic.
+For more information, review [Service tags overview](../../virtual-network/service-tags-overview.md).
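If you manage firewall rules in scripts, the same service tag data is available from the Azure CLI. A sketch using the **AzureCloud** tag mentioned above; the region value is a placeholder:

```azurecli-interactive
# Pull the current AzureCloud address prefixes for allow-listing.
az network list-service-tags --location eastus \
  --query "values[?name=='AzureCloud'].properties.addressPrefixes | [0]" \
  --output tsv
```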
### Register Azure resource providers
azure-government https://docs.microsoft.com/en-us/azure/azure-government/compare-azure-government-global-azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compare-azure-government-global-azure.md
@@ -10,7 +10,7 @@ ms.devlang: na
na Previously updated : 01/18/2021 Last updated : 02/03/2021 # Compare Azure Government and global Azure
@@ -132,8 +132,8 @@ The following Language Understanding **features are not currently available** in
### [Speech Service](../cognitive-services/speech-service/overview.md) The following Speech Service **features are not currently available** in Azure Government:
- Custom Voice
- Neural voices for Text-to-Speech
-See details of supported locales by features in [Language and region support for the Speech Services](../cognitive-services/speech-service/language-support.md).
+
+See details of supported locales by feature in [Language and region support for the Speech Services](../cognitive-services/speech-service/language-support.md). See additional endpoint information in [Speech service in sovereign clouds](../cognitive-services/Speech-Service/sovereign-clouds.md).
### [Translator](../cognitive-services/translator/translator-info-overview.md) The following Translator **features are not currently available** in Azure Government:
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/codeless-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/codeless-overview.md
@@ -13,16 +13,16 @@
Auto-instrumentation, or codeless attach, allows you to enable application monitoring with Application Insights without changing your code.
-Application Insights is integrated with various resource providers and works on different environments. In essence, all you have to do is enable and - in some cases - configure the agent, which will collect the telemetry out of the box. In no time, you'll see the metrics, data, and dependencies in your Application Insights resource, which will allow you to spot the source of potential problems before they occur, and analyze the root cause with end-to-end transaction view.
+Application Insights is integrated with various resource providers and works on different environments. In essence, all you have to do is enable and - in some cases - configure the agent, which will collect the telemetry automatically. In no time, you'll see the metrics, data, and dependencies in your Application Insights resource, which will allow you to spot the source of potential problems before they occur, and analyze the root cause with the end-to-end transaction view.
## Supported environments, languages, and resource providers
-As we're adding more and more integrations, the auto-instrumentation capability matrix becomes complex. The table below shows you the current state of the matter as far as support for various resource providers, languages, and environments go.
+As we're adding additional integrations, the auto-instrumentation capability matrix becomes complex. The table below shows the current state of support for the various resource providers, languages, and environments.
|Environment/Resource Provider | .NET | .NET Core | Java | Node.js | Python |
|--|--|--|--|--|--|
-|Azure App Service on Windows | GA, OnBD* | GA, opt-in | Private Preview | Private Preview | Not supported |
-|Azure App Service on Linux | N/A | Not supported | Private Preview | Public Preview | Not supported |
+|Azure App Service on Windows | GA, OnBD* | GA, opt-in | In progress | In progress | Not supported |
+|Azure App Service on Linux | N/A | Not supported | In progress | Public Preview | Not supported |
|Azure App Service on AKS | N/A | In design | In design | In design | Not supported |
|Azure Functions - basic | GA, OnBD* | GA, OnBD* | GA, OnBD* | GA, OnBD* | GA, OnBD* |
|Azure Functions Windows - dependencies | Not supported | Not supported | Public Preview | Not supported | Not supported |
@@ -37,11 +37,31 @@ As we're adding more and more integrations, the auto-instrumentation capability
### Windows
-[Application monitoring on Azure App Service](./azure-web-apps.md?tabs=net) is available for .NET application and is enabled by default, .NET Core can be enabled with one click, and Java and Node.js are in private preview.
+#### .NET
+Application monitoring on Azure App Service on Windows is available for [.NET applications](./azure-web-apps.md?tabs=net) and is enabled by default.
-### Linux
+#### .NET Core
+Monitoring for [.NET Core applications](https://docs.microsoft.com/azure/azure-monitor/app/azure-web-apps?tabs=netcore) can be enabled with one click.
-Monitoring of Java and Node.js applications in App Service is in public preview and can be enabled in Azure portal, available in all regions.
+#### Java
+The portal integration for monitoring Java applications on App Service on Windows is currently unavailable; however, you can add the Application Insights [Java 3.0 standalone agent](https://docs.microsoft.com/azure/azure-monitor/app/java-in-process-agent) to your application without any code changes before deploying the apps to App Service. The Application Insights Java 3.0 agent is generally available.
+
+#### Node.js
+Monitoring for Node.js applications on Windows cannot currently be enabled from the portal. To monitor Node.js applications, use the [SDK](https://docs.microsoft.com/azure/azure-monitor/app/nodejs).
+
+### Linux
+
+#### .NET Core
+To monitor .NET Core applications running on Linux, use the [SDK](https://docs.microsoft.com/azure/azure-monitor/app/asp-net-core).
+
+#### Java
+Enabling Java application monitoring for App Service on Linux from the portal isn't available, but you can add the [Application Insights Java 3.0 agent](https://docs.microsoft.com/azure/azure-monitor/app/java-in-process-agent) to your app before deploying it to App Service. The Application Insights Java 3.0 agent is generally available.
+
+#### Node.js
+[Monitoring Node.js applications in App Service on Linux](https://docs.microsoft.com/azure/azure-monitor/app/azure-web-apps?tabs=nodejs) is in public preview and can be enabled in the Azure portal, and is available in all regions.
+
+#### Python
+Use the SDK to [monitor your Python app](https://docs.microsoft.com/azure/azure-monitor/app/opencensus-python).
## Azure Functions
@@ -53,7 +73,7 @@ Codeless instrumentation of Azure Kubernetes Service is currently available for
## Azure Windows VMs and virtual machine scale set
-[Auto-instrumentation for Azure VMs and virtual machine scale set](./azure-vm-vmss-apps.md) is available for .NET applications
+Auto-instrumentation for Azure VMs and virtual machine scale sets is available for [.NET](./azure-vm-vmss-apps.md) and [Java](https://docs.microsoft.com/azure/azure-monitor/app/java-in-process-agent).
## On-premises servers You can easily enable monitoring for your [on-premises Windows servers for .NET applications](./status-monitor-v2-overview.md) and for [Java apps](./java-in-process-agent.md).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-automatic-migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/alerts-automatic-migration.md
@@ -7,7 +7,7 @@
# Understand the automatic migration process for your classic alert rules
-As [previously announced](monitoring-classic-retirement.md), classic alerts in Azure Monitor are retired, though still in limited use for resources that do not yet support the new alerts. As part of the retirement process, [a migration tool](alerts-using-migration-tool.md) is available in the Azure portal for customers to trigger migration themselves.
+As [previously announced](monitoring-classic-retirement.md), classic alerts in Azure Monitor are retired for public cloud users, though still in limited use for resources that do not yet support the new alerts. As part of the retirement process, [a migration tool](alerts-using-migration-tool.md) is available in the Azure portal for customers to trigger migration themselves.
This article walks you through the automatic migration process and helps you resolve any issues you might run into. > [!NOTE]
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-classic-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/alerts-classic-portal.md
@@ -9,7 +9,7 @@ Last updated 09/18/2018
# Create, view, and manage classic metric alerts using Azure Monitor > [!WARNING]
-> This article describes how to create older classic metric alerts. Azure Monitor now supports [newer near-real time metric alerts and a new alerts experience](./alerts-overview.md). Classic alerts are [retired](./monitoring-classic-retirement.md), though still in limited use for resources that do not yet support the new alerts.
+> This article describes how to create older classic metric alerts. Azure Monitor now supports [newer near-real time metric alerts and a new alerts experience](./alerts-overview.md). Classic alerts are [retired](./monitoring-classic-retirement.md) for public cloud users, though still in limited use for resources that do not yet support the new alerts.
> Classic metric alerts in Azure Monitor provide a way to get notified when one of your metrics crosses a threshold. Classic metric alerts are an older functionality that allows alerting only on non-dimensional metrics. A newer functionality, metric alerts, improves on classic metric alerts; you can learn more about it in [metric alerts overview](./alerts-metric-overview.md). In this article, we describe how to create, view, and manage classic metric alert rules through the Azure portal, Azure CLI, and PowerShell.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-classic.overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/alerts-classic.overview.md
@@ -9,7 +9,7 @@
# What are classic alerts in Microsoft Azure? > [!NOTE]
-> This article describes how to create older classic metric alerts. Azure Monitor now supports [newer near-real time metric alerts and a new alerts experience](./alerts-overview.md). Classic alerts are [retired](./monitoring-classic-retirement.md), though still in limited use for resources that do not yet support the new alerts.
+> This article describes how to create older classic metric alerts. Azure Monitor now supports [newer near-real time metric alerts and a new alerts experience](./alerts-overview.md). Classic alerts are [retired](./monitoring-classic-retirement.md) for public cloud users, though still in limited use for resources that do not yet support the new alerts.
> Alerts allow you to configure conditions over data and get notified when the conditions match the latest monitoring data.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-enable-template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/alerts-enable-template.md
@@ -10,7 +10,7 @@
# Create a classic metric alert with a Resource Manager template > [!WARNING]
-> This article describes how to create older classic metric alerts. Azure Monitor now supports [newer near-real time metric alerts and a new alerts experience](./alerts-overview.md). Classic alerts are [retired](./monitoring-classic-retirement.md), though still in limited use for resources that do not yet support the new alerts.
+> This article describes how to create older classic metric alerts. Azure Monitor now supports [newer near-real time metric alerts and a new alerts experience](./alerts-overview.md). Classic alerts are [retired](./monitoring-classic-retirement.md) for public cloud users, though still in limited use for resources that do not yet support the new alerts.
> This article shows how you can use an [Azure Resource Manager template](../../azure-resource-manager/templates/template-syntax.md) to configure Azure classic metric alerts. This enables you to automatically set up alerts on your resources when they are created to ensure that all resources are monitored correctly.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-prepare-migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/alerts-prepare-migration.md
@@ -10,7 +10,7 @@
# Prepare your logic apps and runbooks for migration of classic alert rules > [!NOTE]
-> As [previously announced](monitoring-classic-retirement.md), classic alerts in Azure Monitor are retired, though still in limited use for resources that do not yet support the new alerts. The retirement date for those alerts has been further extended. A new date will be announced soon.
+> As [previously announced](monitoring-classic-retirement.md), classic alerts in Azure Monitor are retired for public cloud users, though still in limited use for resources that do not yet support the new alerts. The retirement date for those alerts has been further extended. A new date will be announced soon.
> If you choose to voluntarily migrate your classic alert rules to new alert rules, be aware that there are some differences between the two systems. This article explains those differences and how you can prepare for the change.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-understand-migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/alerts-understand-migration.md
@@ -9,7 +9,7 @@
# Understand migration options to newer alerts
-Classic alerts are [retired](./monitoring-classic-retirement.md), though still in limited use for resources that do not yet support the new alerts. A new date will be announced soon for remaining alerts migration, [Azure Government cloud](../../azure-government/documentation-government-welcome.md), and [Azure China 21Vianet](https://docs.azure.cn/).
+Classic alerts are [retired](./monitoring-classic-retirement.md) for public cloud users, though still in limited use for resources that do not yet support the new alerts. A new date will be announced soon for remaining alerts migration, [Azure Government cloud](../../azure-government/documentation-government-welcome.md), and [Azure China 21Vianet](https://docs.azure.cn/).
This article explains how the manual migration process and the voluntary migration tool work, both of which will be used to migrate remaining alert rules. It also describes remedies for some common problems.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-using-migration-tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/alerts-using-migration-tool.md
@@ -9,7 +9,7 @@
# Use the voluntary migration tool to migrate your classic alert rules
-As [previously announced](monitoring-classic-retirement.md), classic alerts in Azure Monitor are retired, though still in limited use for resources that do not yet support the new alerts. A migration tool was available in the Azure portal to customers who used classic alert rules and who want to trigger migration themselves. This article explains how to use the that migration tool, which will also be used to remaining alerts pending further announcement.
+As [previously announced](monitoring-classic-retirement.md), classic alerts in Azure Monitor are retired for public cloud users, though still in limited use for resources that do not yet support the new alerts. A migration tool was available in the Azure portal to customers who used classic alert rules and who want to trigger migration themselves. This article explains how to use that migration tool, which will also be used to migrate remaining alerts pending further announcement.
## Benefits of new alerts
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-webhooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/alerts-webhooks.md
@@ -10,7 +10,7 @@
# Call a webhook with a classic metric alert in Azure Monitor > [!WARNING]
-> This article describes how to use older classic metric alerts. Azure Monitor now supports [newer near-real time metric alerts and a new alerts experience](./alerts-overview.md). Classic alerts are [retired](./monitoring-classic-retirement.md), though still in limited use for resources that do not yet support the new alerts.
+> This article describes how to use older classic metric alerts. Azure Monitor now supports [newer near-real time metric alerts and a new alerts experience](./alerts-overview.md). Classic alerts are [retired](./monitoring-classic-retirement.md) for public cloud users, though still in limited use for resources that do not yet support the new alerts.
> You can use webhooks to route an Azure alert notification to other systems for post-processing or custom actions. You can use a webhook on an alert to route it to services that send SMS messages, to log bugs, to notify a team via chat or messaging services, or for various other actions.
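For reference, a trimmed sketch of the webhook payload's general shape (field names follow the classic alert webhook schema as commonly documented; all values here are placeholders):

```json
{
  "status": "Activated",
  "context": {
    "timestamp": "2021-02-03T15:00:00.000Z",
    "id": "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/microsoft.insights/alertrules/<rule-name>",
    "name": "<rule-name>",
    "conditionType": "Metric",
    "condition": {
      "metricName": "Percentage CPU",
      "metricValue": "85.5",
      "threshold": "80",
      "timeAggregation": "Average",
      "operator": "GreaterThan"
    },
    "resourceName": "<resource-name>",
    "resourceType": "microsoft.compute/virtualmachines",
    "resourceId": "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<resource-name>"
  },
  "properties": {}
}
```

A receiving service would typically key off `status` and `context.condition` to decide what action to take.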
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/monitoring-classic-retirement https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/monitoring-classic-retirement.md
@@ -13,7 +13,7 @@
Azure Monitor has now become a unified full-stack monitoring service that supports 'One Metric' and 'One Alerts' across resources; for more information, see our [blog post on new Azure Monitor](https://azure.microsoft.com/blog/new-full-stack-monitoring-capabilities-in-azure-monitor/). The new Azure monitoring and alerting platform has been built to be faster, smarter, and extensible – keeping pace with the growing expanse of cloud computing and in line with the Microsoft Intelligent Cloud philosophy.
-With the new Azure monitoring and alerting platform in place, classic alerts in Azure Monitor are retired, though still in limited use for resources that do not yet support the new alerts. The retirement date for those alerts has been further extended. A new date will be announced soon for remaining alerts migration, [Azure Government cloud](../../azure-government/documentation-government-welcome.md), and [Azure China 21Vianet](https://docs.azure.cn/).
+With the new Azure monitoring and alerting platform in place, classic alerts in Azure Monitor are retired for public cloud users, though still in limited use for resources that do not yet support the new alerts. The retirement date for those alerts has been further extended. A new date will be announced soon for remaining alerts migration, [Azure Government cloud](../../azure-government/documentation-government-welcome.md), and [Azure China 21Vianet](https://docs.azure.cn/).
![Classic alert in Azure portal](media/monitoring-classic-retirement/monitor-alert-screen2.png)
@@ -43,7 +43,7 @@ Newer metrics for Azure resources are available as:
## Retirement of Classic monitoring and alerting platform
-As stated earlier, older classic monitoring and alerting are retired; including the closure of related APIs, Azure portal interface, and services in it, though still in limited use for resources that do not yet support the new alerts. Specifically, these features will be deprecated:
+As stated earlier, older classic monitoring and alerting are retired for public cloud users, including the closure of related APIs, the Azure portal interface, and associated services, though they remain in limited use for resources that do not yet support the new alerts. Specifically, these features will be deprecated:
- Older (classic) metrics and alerts for Azure resources as currently available via [Alerts (classic) section](./alerts-classic.overview.md) of Azure portal; accessible as [microsoft.insights/alertrules](/rest/api/monitor/alertrules) resource
- Older (classic) platform and custom metrics for Application Insights as well as alerting on them as currently available via [Alerts (classic) section](./alerts-classic.overview.md) of Azure portal and accessible as [microsoft.insights/alertrules](/rest/api/monitor/alertrules) resource
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/operationalinsights-api-retirement https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/operationalinsights-api-retirement.md
@@ -63,6 +63,15 @@ Depending on the configuration method you use, you should update the new version
```
+### More information
+If you have questions, get answers from [our tech community experts](https://techcommunity.microsoft.com/t5/azure-monitor/bd-p/AzureMonitor). If you have a support plan and you need technical help, create a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest):
+1. Under *Issue type*, select **Technical**.
+2. Under *Subscription*, select your subscription.
+3. Under *Service*, select **My services**, then select **Log Analytics**.
+4. Under *Summary*, type a description of your issue.
+5. Under *Problem type*, select **Log Analytics workspace management**.
+6. Under *Problem subtype*, select **ARM templates, PowerShell and CLI**.
+ ## Next steps - See the [reference for the OperationalInsights workspace API](/rest/api/loganalytics/workspaces).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/private-link-security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/private-link-security.md
@@ -10,7 +10,7 @@
# Use Azure Private Link to securely connect networks to Azure Monitor
-[Azure Private Link](../../private-link/private-link-overview.md) allows you to securely link Azure PaaS services to your virtual network using private endpoints. For many services, you just set up an endpoint per resource. However, Azure Monitor is a constellation of different interconnected services that work together to monitor your workloads. As a result, we have built a resource called an Azure Monitor Private Link Scope (AMPLS) that allows you to define the boundaries of your monitoring network and connect to your virtual network. This article covers when to use and how to set up an Azure Monitor Private Link Scope.
+[Azure Private Link](../../private-link/private-link-overview.md) allows you to securely link Azure PaaS services to your virtual network using private endpoints. For many services, you just set up an endpoint per resource. However, Azure Monitor is a constellation of different interconnected services that work together to monitor your workloads. As a result, we have built a resource called an Azure Monitor Private Link Scope (AMPLS). AMPLS allows you to define the boundaries of your monitoring network and connect to your virtual network. This article covers when to use and how to set up an Azure Monitor Private Link Scope.
## Advantages
@@ -26,65 +26,59 @@ For more information, see [Key Benefits of Private Link](../../private-link/pri
## How it works
-Azure Monitor Private Link Scope is a grouping resource to connect one or more private endpoints (and therefore the virtual networks they are contained in) to one or more Azure Monitor resources. The resources include Log Analytics workspaces and Application Insights components.
+Azure Monitor Private Link Scope (AMPLS) connects private endpoints (and the VNets they're contained in) to one or more Azure Monitor resources - Log Analytics workspaces and Application Insights components.
-![Diagram of resource topology](./media/private-link-security/private-link-topology-1.png)
+![Diagram of basic resource topology](./media/private-link-security/private-link-basic-topology.png)
> [!NOTE] > A single Azure Monitor resource can belong to multiple AMPLSs, but you cannot connect a single VNet to more than one AMPLS.
-## Planning based on your network
+### The issue of DNS overrides
+Log Analytics and Application Insights use global endpoints for some of their services, meaning they serve requests targeting any workspace/component. For example, Application Insights uses a global endpoint for log ingestion, and both Application Insights and Log Analytics use a global endpoint for query requests.
-Before setting up your AMPLS resources, consider your network isolation requirements. Evaluate your virtual networks' access to public internet, and the access restrictions of each of your Azure Monitor resources (that is, Application Insights components and Log Analytics workspaces).
+When you set up a Private Link connection, your DNS is updated to map Azure Monitor endpoints to private IP addresses from your VNet's IP range. This change overrides any previous mapping of these endpoints, which can have meaningful implications, as reviewed below.
-> [!NOTE]
-> Hub-spoke networks, or any other topology of peered networks, can setup a Private Link between the hub (main) VNet and the relevant Azure Monitor resources, instead of setting up a Private Link on each and every VNet. This makes sense especially if the Azure Monitor resources used by these networks are shared. However, if you'd like to allow each VNet to access a separate set of monitoring resources, create a Private Link to a dedicated AMPLS for each network.
+## Planning based on your network topology
-### Evaluate which virtual networks should connect to a Private Link
+Before setting up Azure Monitor Private Link, consider your network topology, and specifically your DNS routing topology.
-Start by evaluating which of your virtual networks (VNets) have restricted access to the internet. VNets that have free internet may not require a Private Link to access your Azure Monitor resources. The monitoring resources your VNets connect to may restrict incoming traffic and require a Private Link connection (either for log ingestion or query). In such cases, even a VNet that has access to the public internet needs to connect to these resources over a Private Link, and through an AMPLS.
+### Azure Monitor Private Link applies to all Azure Monitor resources - it's All or Nothing
+Since some Azure Monitor endpoints are global, it's impossible to create a Private Link connection for a specific component or workspace. Instead, when you set up a Private Link to a single Application Insights component, your DNS records are updated for **all** Application Insights components. Any attempt to ingest to or query a component will then go through the Private Link, and possibly fail. Similarly, setting up a Private Link to a single workspace will cause all Log Analytics queries to go through the Private Link query endpoint (but not ingestion requests, which have workspace-specific endpoints).
-### Evaluate which Azure Monitor resources should have a Private Link
+![Diagram of DNS overrides in a single VNet](./media/private-link-security/dns-overrides-single-vnet.png)
-Review each of your Azure Monitor resources:
+That's true not only for a specific VNet, but for all VNets that share the same DNS server (see [The issue of DNS overrides](#the-issue-of-dns-overrides)). For example, a request to ingest logs to any Application Insights component will always be sent through the Private Link route. Components that aren't linked to the AMPLS will fail the Private Link validation and not go through.
-- Should the resource allow ingestion of logs from resources located on specific VNets only?
-- Should the resource be queried only by clients located on specific VNETs?
+**Effectively, that means you should connect all Azure Monitor resources in your network to a Private Link (add them to AMPLS), or none of them.**
-If the answer to any of these questions is yes, set the restrictions as explained in [Configuring Log Analytics](#configure-log-analytics) workspaces and [Configuring Application Insights components](#configure-application-insights) and associate these resources to a single or several AMPLS(s). Virtual networks that should access these monitoring resources need to have a Private Endpoint that connects to the relevant AMPLS.
-Remember – you can connect the same workspaces or application to multiple AMPLS, to allow them to be reached by different networks.
+### Azure Monitor Private Link applies to your entire network
+Some networks are composed of multiple VNets. If these VNets use the same DNS server, they will override each other's DNS mappings and possibly break each other's communication with Azure Monitor (see [The issue of DNS overrides](#the-issue-of-dns-overrides)). Ultimately, only the last VNet will be able to communicate with Azure Monitor, since the DNS will map Azure Monitor endpoints to private IPs from this VNet's range (which may not be reachable from other VNets).
-### Group together monitoring resources by network accessibility
+![Diagram of DNS overrides in multiple VNets](./media/private-link-security/dns-overrides-multiple-vnets.png)
-Since each VNet can connect to only one AMPLS resource, you must group together monitoring resources that should be accessible to the same networks. The simplest way to manage this grouping is to create one AMPLS per VNet, and select the resources to connect to that network. However, to reduce resources and improve manageability, you may want to reuse an AMPLS across networks.
+In the above diagram, VNet 10.0.1.x first connects to AMPLS1 and maps the Azure Monitor global endpoints to IPs from its range. Later, VNet 10.0.2.x connects to AMPLS2, and overrides the DNS mapping of the *same global endpoints* with IPs from its range. Since these VNets are not peered, the first VNet now fails to reach these endpoints.
-For example, if your internal virtual networks VNet1 and VNet2 should connect to workspaces Workspace1 and Workspace2 and Application Insights component Application Insights 3, associate all three resources to the same AMPLS. If VNet3 should only access Workspace1, create another AMPLS resource, associate Workspace1 to it, and connect VNet3 as shown in the following diagrams:
+**VNets that use the same DNS should be peered - either directly or through a hub VNet. VNets that aren't peered should use different DNS servers, DNS forwarders, or another mechanism to avoid DNS clashes.**
-![Diagram of AMPLS A topology](./media/private-link-security/ampls-topology-a-1.png)
+### Hub-spoke networks
+Hub-spoke topologies can avoid the issue of DNS overrides by setting up a Private Link on the hub (main) VNet, instead of setting up a Private Link for each VNet separately. This setup makes sense especially if the Azure Monitor resources used by the spoke VNets are shared.
-![Diagram of AMPLS B topology](./media/private-link-security/ampls-topology-b-1.png)
+![Hub-and-spoke-single-PE](./media/private-link-security/hub-and-spoke-with-single-private-endpoint.png)
-### Consider limits
+> [!NOTE]
+> You may intentionally prefer to create separate Private Links for your spoke VNets, for example to allow each VNet to access a limited set of monitoring resources. In such cases, you can create a dedicated Private Endpoint and AMPLS for each VNet, but must also verify they don't share the same DNS server in order to avoid DNS overrides.
-There are a number of limits you should consider when planning your Private Link setup:
-* A VNet can only connect to 1 AMPLS object. That means the AMPLS object must provide access to all the Azure Monitor resources the VNet should have access to.
-* An Azure Monitor resource (Workspace or Application Insights component) can connect to 5 AMPLSs at most.
-* An AMPLS object can connect to 50 Azure Monitor resources at most.
-* An AMPLS object can connect to 10 Private Endpoints at most.
+### Consider limits
-In the below topology:
+As listed in [Restrictions and limitations](#restrictions-and-limitations), the AMPLS object has a number of limits, depicted in the below topology:
* Each VNet connects to only **1** AMPLS object.
-* AMPLS B is connected to Private Endpoints of two VNets (VNet2 and VNet3), using 2/10 (20%) of its possible Private Endpoint connections.
-* AMPLS A connects to two workspaces and one Application Insight component, using 3/50 (6%) of its possible Azure Monitor resources connections.
-* Workspace2 connects to AMPLS A and AMPLS B, using 2/5 (40%) of its possible AMPLS connections.
+* AMPLS B is connected to Private Endpoints of two VNets (VNet2 and VNet3), using 2 of the 10 possible Private Endpoint connections.
+* AMPLS A connects to two workspaces and one Application Insight component, using 3 of the 50 possible Azure Monitor resources connections.
+* Workspace2 connects to AMPLS A and AMPLS B, using 2 of the 5 possible AMPLS connections.
![Diagram of AMPLS limits](./media/private-link-security/ampls-limits.png)
-> [!NOTE]
-> In some network topologies (mainly Hub-spoke) you may quickly reach the 10 VNets limit for a single AMPLS. In such cases it's advised to use a shared private link connection instead of separate ones. Create a single Private Endpoint on the hub network, link it to your AMPLS and peer the relevant networks to the hub network.
-
-![Hub-and-spoke-single-PE](./media/private-link-security/hub-and-spoke-with-single-private-endpoint.png)
## Example connection
@@ -94,21 +88,21 @@ Start by creating an Azure Monitor Private Link Scope resource.
![Find Azure Monitor Private Link Scope](./media/private-link-security/ampls-find-1c.png)
-2. Click **create**.
+2. Select **create**.
3. Pick a Subscription and Resource Group.
-4. Give the AMPLS a name. It is best to use a name that is clear what purpose and security boundary the Scope will be used for so that someone won't accidentally break network security boundaries. For example, "AppServerProdTelem".
-5. Click **Review + Create**.
+4. Give the AMPLS a name. It's best to use a meaningful and clear name, such as "AppServerProdTelem".
+5. Select **Review + Create**.
![Create Azure Monitor Private Link Scope](./media/private-link-security/ampls-create-1d.png)
-6. Let the validation pass, and then click **Create**.
+6. Let the validation pass, and then select **Create**.
### Connect Azure Monitor resources Connect Azure Monitor resources (Log Analytics workspaces and Application Insights components) to your AMPLS.
-1. In your Azure Monitor Private Link scope, click on **Azure Monitor Resources** in the left-hand menu. Click the **Add** button.
-2. Add the workspace or component. Clicking the **Add** button brings up a dialog where you can select Azure Monitor resources. You can browse through your subscriptions and resource groups, or you can type in their name to filter down to them. Select the workspace or component and click **Apply** to add them to your scope.
+1. In your Azure Monitor Private Link scope, select **Azure Monitor Resources** in the left-hand menu. Select the **Add** button.
+2. Add the workspace or component. Selecting the **Add** button brings up a dialog where you can select Azure Monitor resources. You can browse through your subscriptions and resource groups, or you can type in their name to filter down to them. Select the workspace or component and select **Apply** to add them to your scope.
![Screenshot of select a scope UX](./media/private-link-security/ampls-select-2.png)
@@ -119,13 +113,13 @@ Connect Azure Monitor resources (Log Analytics workspaces and Application Insigh
Now that you have resources connected to your AMPLS, create a private endpoint to connect your network. You can do this task in the [Azure portal Private Link center](https://portal.azure.com/#blade/Microsoft_Azure_Network/PrivateLinkCenterBlade/privateendpoints), or inside your Azure Monitor Private Link Scope, as done in this example.
-1. In your scope resource, click on **Private Endpoint connections** in the left-hand resource menu. Click on **Private Endpoint** to start the endpoint create process. You can also approve connections that were started in the Private Link center here by selecting them and clicking **Approve**.
+1. In your scope resource, select **Private Endpoint connections** in the left-hand resource menu. Select **Private Endpoint** to start the endpoint create process. You can also approve connections that were started in the Private Link center here by selecting them and selecting **Approve**.
![Screenshot of Private Endpoint Connections UX](./media/private-link-security/ampls-select-private-endpoint-connect-3.png) 2. Pick the subscription, resource group, and name of the endpoint, and the region it should live in. The region needs to be the same region as the virtual network you will connect it to.
-3. Click **Next: Resource**.
+3. Select **Next: Resource**.
4. In the Resource screen,
@@ -135,7 +129,7 @@ Now that you have resources connected to your AMPLS, create a private endpoint t
c. From the **resource** drop-down, choose your Private Link scope you created earlier.
- d. Click **Next: Configuration >**.
+ d. Select **Next: Configuration >**.
![Screenshot of select Create Private Endpoint](./media/private-link-security/ampls-select-private-endpoint-create-4.png) 5. On the configuration pane,
@@ -146,31 +140,31 @@ Now that you have resources connected to your AMPLS, create a private endpoint t
> [!NOTE] > If you choose **No** and prefer to manage DNS records manually, first complete setting up your Private Link - including this Private Endpoint and the AMPLS configuration. Then, configure your DNS according to the instructions in [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md). Make sure not to create empty records as preparation for your Private Link setup. The DNS records you create can override existing settings and impact your connectivity with Azure Monitor.
- c. Click **Review + create**.
+ c. Select **Review + create**.
d. Let validation pass.
- e. Click **Create**.
+ e. Select **Create**.
![Screenshot of select Create Private Endpoint2](./media/private-link-security/ampls-select-private-endpoint-create-5.png)
-You have now created a new private endpoint that is connected to this Azure Monitor Private Link scope.
+You've now created a new private endpoint that is connected to this AMPLS.
## Configure Log Analytics
-Go to the Azure portal. In your Log Analytics workspace resource there's a menu item **Network Isolation** on the left-hand side. You can control two different states from this menu.
+Go to the Azure portal. In your Log Analytics workspace resource menu, there's an item called **Network Isolation** on the left-hand side. You can control two different states from this menu.
![LA Network Isolation](./media/private-link-security/ampls-log-analytics-lan-network-isolation-6.png) ### Connected Azure Monitor Private Link scopes
-All scopes connected to this workspace show up in this screen. Connecting to scopes (AMPLSs) allows network traffic from the virtual network connected to each AMPLS to reach this workspace. Creating a connection through here has the same effect as setting it up on the scope, as we did in [Connecting Azure Monitor resources](#connect-azure-monitor-resources). To add a new connection, click **Add** and select the Azure Monitor Private Link Scope. Click **Apply** to connect it. Note that a workspace can connect to 5 AMPLS objects, as explained in [Consider limits](#consider-limits).
+All scopes connected to this workspace show up in this screen. Connecting to scopes (AMPLSs) allows network traffic from the virtual network connected to each AMPLS to reach this workspace. Creating a connection through here has the same effect as setting it up on the scope, as we did in [Connecting Azure Monitor resources](#connect-azure-monitor-resources). To add a new connection, select **Add** and select the Azure Monitor Private Link Scope. Select **Apply** to connect it. Note that a workspace can connect to 5 AMPLS objects, as mentioned in [Restrictions and limitations](#restrictions-and-limitations).
### Access from outside of private links scopes
-The settings on the bottom part of this page control access from public networks, meaning networks not connected through the scopes listed above. If you set **Allow public network access for ingestion** to **No**, then machines outside of the connected scopes cannot upload data to this workspace. If you set **Allow public network access for queries** to **No**, then machines outside of the scopes cannot access data in this workspace, meaning it won't be able to query workspace data. That includes queries in workbooks, dashboards, API-based client experiences, insights in the Azure portal, and more. Experiences running outside the Azure portal, and that query Log Analytics data also have to be running within the private-linked VNET.
+The settings on the bottom part of this page control access from public networks, meaning networks not connected through the scopes listed above. Setting **Allow public network access for ingestion** to **No** blocks ingestion of logs from machines outside of the connected scopes. Setting **Allow public network access for queries** to **No** blocks queries coming from machines outside of the scopes. That includes queries run via workbooks, dashboards, API-based client experiences, insights in the Azure portal, and more. Experiences that run outside the Azure portal and query Log Analytics data must also run within the private-linked VNet.
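These two toggles can also be set programmatically. Below is a minimal ARM template sketch for a workspace, assuming the `publicNetworkAccessForIngestion` and `publicNetworkAccessForQuery` workspace properties (the workspace name, location, and API version are illustrative placeholders):

```json
{
  "type": "Microsoft.OperationalInsights/workspaces",
  "apiVersion": "2020-08-01",
  "name": "my-workspace",
  "location": "eastus",
  "properties": {
    "publicNetworkAccessForIngestion": "Disabled",
    "publicNetworkAccessForQuery": "Disabled"
  }
}
```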
### Exceptions Restricting access as explained above doesn't apply to the Azure Resource Manager and therefore has the following limitations:
-* Access to data - while blocking/allowing queries from public networks applies to most Log Analytics experiences, some experiences query data through Azure Resource Manager and therefore won't be able to query data unless Private Link settings are applied to the Resource Manager as well (feature coming up soon). This includes, for example, Azure Monitor solutions, workbooks and Insights, and the LogicApp connector.
+* Access to data - while blocking/allowing queries from public networks applies to most Log Analytics experiences, some experiences query data through Azure Resource Manager and therefore won't be able to query data unless Private Link settings are applied to the Resource Manager as well (feature coming up soon). This includes, for example, Azure Monitor solutions, Workbooks and Insights, and the LogicApp connector.
* Workspace management - Workspace setting and configuration changes (including turning these access settings on or off) are managed by Azure Resource Manager. Restrict access to workspace management using the appropriate roles, permissions, network controls, and auditing. For more information, see [Azure Monitor Roles, Permissions, and Security](roles-permissions-security.md). > [!NOTE]
@@ -189,11 +183,11 @@ To allow the Log Analytics Agent to download solution packs, add the appropriate
## Configure Application Insights
-Go to the Azure portal. In your Azure Monitor Application Insights component resource is a menu item **Network Isolation** on the left-hand side. You can control two different states from this menu.
+Go to the Azure portal. In your Azure Monitor Application Insights component resource, there's a menu item **Network Isolation** on the left-hand side. You can control two different states from this menu.
![AI Network Isolation](./media/private-link-security/ampls-application-insights-lan-network-isolation-6.png)
-First, you can connect this Application Insights resource to Azure Monitor Private Link scopes that you have access to. Click **Add** and select the **Azure Monitor Private Link Scope**. Click Apply to connect it. All connected scopes show up in this screen. Making this connection allows network traffic in the connected virtual networks to reach this component. Making the connection has the same effect as connecting it from the scope as we did in [Connecting Azure Monitor resources](#connect-azure-monitor-resources).
+First, you can connect this Application Insights resource to Azure Monitor Private Link scopes that you have access to. Select **Add** and select the **Azure Monitor Private Link Scope**. Select **Apply** to connect it. All connected scopes show up in this screen. Making this connection allows network traffic in the connected virtual networks to reach this component, and has the same effect as connecting it from the scope as we did in [Connecting Azure Monitor resources](#connect-azure-monitor-resources).
Second, you can control how this resource can be reached from outside of the private link scopes listed previously. If you set **Allow public network access for ingestion** to **No**, then machines or SDKs outside of the connected scopes cannot upload data to this component. If you set **Allow public network access for queries** to **No**, then machines outside of the scopes cannot access data in this Application Insights resource. That data includes access to APM logs, metrics, and the live metrics stream, as well as experiences built on top such as workbooks, dashboards, query API-based client experiences, insights in the Azure portal, and more.
@@ -210,13 +204,13 @@ Restricting access in this manner only applies to data in the Application Insigh
## Use APIs and command line
-You can automate the process described earlier using Azure Resource Manager templates, REST and command-line interfaces.
+You can automate the process described earlier using Azure Resource Manager templates, REST, and command-line interfaces.
To create and manage private link scopes, use the [REST API](/rest/api/monitor/private%20link%20scopes%20(preview)) or [Azure CLI (az monitor private-link-scope)](/cli/azure/monitor/private-link-scope). To manage network access, use the flags `[--ingestion-access {Disabled, Enabled}]` and `[--query-access {Disabled, Enabled}]`on [Log Analytics workspaces](/cli/azure/monitor/log-analytics/workspace) or [Application Insights components](/cli/azure/ext/application-insights/monitor/app-insights/component).
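For example, a hedged ARM template sketch that creates an AMPLS and links a workspace to it (resource names are placeholders; the `microsoft.insights/privateLinkScopes` resource types and preview API version are assumptions based on the REST API linked above):

```json
{
  "resources": [
    {
      "type": "microsoft.insights/privateLinkScopes",
      "apiVersion": "2019-10-17-preview",
      "name": "my-ampls",
      "location": "global",
      "properties": {}
    },
    {
      "type": "microsoft.insights/privateLinkScopes/scopedResources",
      "apiVersion": "2019-10-17-preview",
      "name": "my-ampls/scoped-workspace",
      "dependsOn": [
        "[resourceId('microsoft.insights/privateLinkScopes', 'my-ampls')]"
      ],
      "properties": {
        "linkedResourceId": "[resourceId('Microsoft.OperationalInsights/workspaces', 'my-workspace')]"
      }
    }
  ]
}
```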
-## Collect custom logs over Private Link
+## Collect custom logs and IIS logs over Private Link
Storage accounts are used in the ingestion process of custom logs. By default, service-managed storage accounts are used. However, to ingest custom logs over Private Link, you must use your own storage accounts and associate them with Log Analytics workspace(s). For details on how to set up such accounts, see the [command line documentation](/cli/azure/monitor/log-analytics/workspace/linked-storage).
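As a sketch of what such an association might look like in an ARM template (names and API version are illustrative assumptions; `CustomLogs` is the data source type for custom-log linked storage as the author understands it):

```json
{
  "type": "Microsoft.OperationalInsights/workspaces/linkedStorageAccounts",
  "apiVersion": "2020-08-01",
  "name": "my-workspace/CustomLogs",
  "properties": {
    "storageAccountIds": [
      "[resourceId('Microsoft.Storage/storageAccounts', 'mystorageacct')]"
    ]
  }
}
```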
@@ -224,6 +218,16 @@ For more information on bringing your own storage account, see [Customer-owned s
## Restrictions and limitations
+### AMPLS
+The AMPLS object has a number of limits you should consider when planning your Private Link setup:
+
+* A VNet can only connect to 1 AMPLS object. That means the AMPLS object must provide access to all the Azure Monitor resources the VNet should have access to.
+* An Azure Monitor resource (Workspace or Application Insights component) can connect to 5 AMPLSs at most.
+* An AMPLS object can connect to 50 Azure Monitor resources at most.
+* An AMPLS object can connect to 10 Private Endpoints at most.
+
+See [Consider limits](#consider-limits) for a deeper review of these limits and how to plan your Private Link setup accordingly.
+ ### Agents The latest versions of the Windows and Linux agents must be used on private networks to enable secure ingestion to Log Analytics workspaces. Older versions cannot upload monitoring data in a private network.
@@ -257,8 +261,8 @@ Bundle the JavaScript code in your script so that the browser does not attempt t
### Browser DNS settings
-If you're connecting to your Azure Monitor resources over a Private Link, traffic to these resource must go through the private endpoint that is configured on your network. To enable the private endpoint, update your DNS settings as explained in [Connect to a private endpoint](#connect-to-a-private-endpoint). Some browsers use their own DNS settings instead of the ones you set. The browser might attempt to connect to Azure Monitor public endpoints and bypass the Private Link entirely. Verify that your browsers settings don't override or cache old DNS settings.
+If you're connecting to your Azure Monitor resources over a Private Link, traffic to these resources must go through the private endpoint that is configured on your network. To enable the private endpoint, update your DNS settings as explained in [Connect to a private endpoint](#connect-to-a-private-endpoint). Some browsers use their own DNS settings instead of the ones you set. The browser might attempt to connect to Azure Monitor public endpoints and bypass the Private Link entirely. Verify that your browser's settings don't override or cache old DNS settings.
## Next steps -- Learn about [private storage](private-storage.md)
+- Learn about [private storage](private-storage.md)
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/azure-netapp-files-solution-architectures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
@@ -13,7 +13,7 @@
na ms.devlang: na Previously updated : 01/22/2021 Last updated : 02/03/2021 # Solution architectures using Azure NetApp Files
@@ -68,6 +68,7 @@ This section provides references to SAP on Azure solutions.
### SAP HANA * [SAP HANA Azure virtual machine storage configurations](../virtual-machines/workloads/sap/hana-vm-operations-storage.md)
+* [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](../virtual-machines/workloads/sap/hana-vm-operations-netapp.md)
* [High availability of SAP HANA Scale-up with Azure NetApp Files on Red Hat Enterprise Linux](../virtual-machines/workloads/sap/sap-hana-high-availability-netapp-files-red-hat.md) * [SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files on SUSE Linux Enterprise Server](../virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse.md) * [SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files on Red Hat Enterprise Linux](../virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-rhel.md)
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deployment-manager-health-check https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deployment-manager-health-check.md
@@ -14,7 +14,7 @@
## Health monitoring providers
-In order to make health integration as easy as possible, Microsoft has been working with some of the top service health monitoring companies to provide you with a simple copy/paste solution to integrate health checks with your deployments. If youΓÇÖre not already using a health monitor, these are great solutions to start with:
+In order to make health integration as easy as possible, Microsoft has been working with some of the top service health monitoring companies to provide you with a simple copy/paste solution to integrate health checks with your deployments. If you're not already using a health monitor, these are great solutions to start with:
| ![azure deployment manager health monitor provider azure monitor](./media/deployment-manager-health-check/azure-deployment-manager-health-monitor-provider-azure-monitor.svg)| ![azure deployment manager health monitor provider datadog](./media/deployment-manager-health-check/azure-deployment-manager-health-monitor-provider-datadog.svg) | ![azure deployment manager health monitor provider site24x7](./media/deployment-manager-health-check/azure-deployment-manager-health-monitor-provider-site24x7.svg) | ![azure deployment manager health monitor provider wavefront](./media/deployment-manager-health-check/azure-deployment-manager-health-monitor-provider-wavefront.svg) |
|--|--|--|--|
@@ -24,20 +24,20 @@ In order to make health integration as easy as possible, Microsoft has been work
[Health monitoring providers](#health-monitoring-providers) offer several mechanisms for monitoring services and alerting you of any service health issues. [Azure Monitor](../../azure-monitor/overview.md) is an example of one such offering. Azure Monitor can be used to create alerts when certain thresholds are exceeded. For example, your memory and CPU utilization spike beyond expected levels when you deploy a new update to your service. When notified, you can take corrective actions.
-These health providers typically offer REST APIs so that the status of your serviceΓÇÖs monitors can be examined programmatically. The REST APIs can either come back with a simple healthy/unhealthy signal (determined by the HTTP response code), and/or with detailed information about the signals it is receiving.
+These health providers typically offer REST APIs so that the status of your service's monitors can be examined programmatically. The REST APIs can either come back with a simple healthy/unhealthy signal (determined by the HTTP response code), and/or with detailed information about the signals it is receiving.
-The new *healthCheck* step in Azure Deployment Manager allows you to declare HTTP codes that indicate a healthy service, or, for more complex REST results, you can even specify regular expressions that, if they match, indicate a healthy response.
+The new `healthCheck` step in Azure Deployment Manager allows you to declare HTTP codes that indicate a healthy service. For complex REST results you can specify regular expressions that, when matched, indicate a healthy response.
-The flow to getting setup with Azure Deployment Manager health checks:
+The flow to set up Azure Deployment Manager health checks:
1. Create your health monitors via a health service provider of your choice.
-1. Create one or more healthCheck steps as part of your Azure Deployment Manager rollout. Fill out the healthCheck steps with the following information:
+1. Create one or more `healthCheck` steps as part of your Azure Deployment Manager rollout. Fill out the `healthCheck` steps with the following information:
1. The URI for the REST API for your health monitors (as defined by your health service provider).
- 1. Authentication information. Currently only API-key style authentication is supported. For Azure Monitor, the authentication type should be set as – "RolloutIdentity" as the user assigned managed identity used for Azure Deployment Manager Rollout extends for Azure Monitor.
- 1. [HTTP status codes](https://www.wikipedia.org/wiki/List_of_HTTP_status_codes) or regular expressions that define a healthy response. Note that you may provide regular expressions, which ALL must match for the response to be considered healthy, or you may provide expressions of which ANY must match for the response to be considered healthy. Both methods are supported.
+ 1. Authentication information. Currently only API-key style authentication is supported. For Azure Monitor, the authentication type should be set as `RolloutIdentity`, since the user-assigned managed identity used for the Azure Deployment Manager rollout extends to Azure Monitor.
+ 1. [HTTP status codes](https://www.wikipedia.org/wiki/List_of_HTTP_status_codes) or regular expressions that define a healthy response. You may provide regular expressions, which ALL must match for the response to be considered healthy, or you may provide expressions of which ANY must match for the response to be considered healthy. Both methods are supported.
- The following Json is an example for integrating Azure Monitor with Azure Deployment Manager that leverages RolloutIdentity and establishes health check wherein a Rollout proceeds if there are no alerts. The only supported Azure Monitor API: [Alerts ΓÇô Get All](/rest/api/monitor/alertsmanagement/alerts/getall).
+ The following JSON is an example to integrate Azure Monitor with Azure Deployment Manager. The example uses `RolloutIdentity` and establishes a health check where a rollout proceeds if there are no alerts. The only supported Azure Monitor API is [Alerts – Get All](/rest/api/monitor/alertsmanagement/alerts/getall).
```json {
@@ -82,7 +82,7 @@ The flow to getting setup with Azure Deployment Manager health checks:
} ```
- The following Json is an example for all other health monitoring providers:
+ The following JSON is an example for all other health monitoring providers:
```json {
@@ -131,7 +131,7 @@ The flow to getting setup with Azure Deployment Manager health checks:
}, ```
-1. Invoke the healthCheck steps at the appropriate time in your Azure Deployment Manager rollout. In the following example, a health check step is invoked in **postDeploymentSteps** of **stepGroup2**.
+1. Invoke the `healthCheck` steps at the appropriate time in your Azure Deployment Manager rollout. In the following example, a `healthCheck` step is invoked in `postDeploymentSteps` of `stepGroup2`.
```json "stepGroups": [
@@ -169,33 +169,35 @@ The flow to getting setup with Azure Deployment Manager health checks:
] ```
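In outline, a step group that runs a health check after its deployment operation has the following shape (a sketch; both resource IDs are placeholders):

```json
"stepGroups": [
  {
    "name": "stepGroup2",
    "dependsOnStepGroups": [ "stepGroup1" ],
    "deploymentTargetId": "<service-unit-resource-ID>",
    "postDeploymentSteps": [
      {
        "stepId": "<healthCheck-step-resource-ID>"
      }
    ]
  }
]
```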
-To walk through an example, see [Tutorial: Use health check in Azure Deployment Manager](./deployment-manager-health-check.md).
+To walk through an example, see [Tutorial: Use health check in Azure Deployment Manager](./deployment-manager-tutorial-health-check.md).
## Phases of a health check
-At this point Azure Deployment Manager knows how to query for the health of your service and at what phases in your rollout to do so. However, Azure Deployment Manager also allows for deep configuration of the timing of these checks. A healthCheck step is executed in 3 sequential phases, all of which have configurable durations:
+At this point Azure Deployment Manager knows how to query for the health of your service and at which phases in your rollout to do so. However, Azure Deployment Manager also allows for deep configuration of the timing of these checks. A `healthCheck` step is executed in three sequential phases, all of which have configurable durations:
1. Wait
- 1. After a deployment operation is completed, VMs may be rebooting, reconfiguring based on new data, or even being started for the first time. It also takes time for services to start emitting health signals to be aggregated by the health monitoring provider into something useful. During this tumultuous process, it may not make sense to check for service health since the update has not yet reached a steady state. Indeed, the service may be oscillating between healthy and unhealthy states as the resources settle.
- 1. During the Wait phase, service health is not monitored. This is used to allow the deployed resources the time to bake before beginning the health check process.
+ 1. After a deployment operation is completed, VMs may be rebooting, reconfiguring based on new data, or even being started for the first time. It also takes time for services to start emitting health signals to be aggregated by the health monitoring provider into something useful. During this tumultuous process, it may not make sense to check for service health since the update hasn't yet reached a steady state. Indeed, the service may be oscillating between healthy and unhealthy states as the resources settle.
+ 1. During the Wait phase, service health isn't monitored. This phase gives the deployed resources time to bake before the health check process begins.
+ 1. Elastic
- 1. Since it is impossible to know in all cases how long resources will take to bake before they become stable, the Elastic phase allows for a flexible time period between when the resources are potentially unstable and when they are required to maintain a healthy steady state.
+ 1. Since it's impossible to know in all cases how long it will take before resources become stable, the Elastic phase allows for a flexible time period between when the resources are potentially unstable and when they are required to maintain a healthy steady state.
    1. When the Elastic phase begins, Azure Deployment Manager begins polling the provided REST endpoint for service health periodically. The polling interval is configurable.
    1. If the health monitor comes back with signals indicating that the service is unhealthy, these signals are ignored, the Elastic phase continues, and polling continues.
- 1. As soon as the health monitor comes back with signals indicating that the service is healthy, the Elastic phase ends and the HealthyState phase begins.
+ 1. When the health monitor returns signals indicating that the service is healthy, the Elastic phase ends and the HealthyState phase begins.
    1. Thus, the duration specified for the Elastic phase is the maximum amount of time that can be spent polling for service health before a healthy response is considered mandatory.
1. HealthyState
    1. During the HealthyState phase, service health is continually polled at the same interval as the Elastic phase.
    1. The service is expected to maintain healthy signals from the health monitoring provider for the entire specified duration.
    1. If at any point an unhealthy response is detected, Azure Deployment Manager will stop the entire rollout and return the REST response carrying the unhealthy service signals.
- 1. Once the HealthyState duration has ended, the healthCheck is complete, and deployment continues to the next step.
+ 1. After the HealthyState duration has ended, the `healthCheck` is complete, and deployment continues to the next step.
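The three phases map to duration attributes on the `healthCheck` step, expressed as ISO 8601 durations. A sketch of just the timing attributes (the values are illustrative; the names assume the `2018-09-01-preview` schema):

```json
{
  "stepType": "healthCheck",
  "attributes": {
    "waitDuration": "PT10M",
    "maxElasticDuration": "PT30M",
    "healthyStateDuration": "PT1H"
  }
}
```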
## Next steps

In this article, you learned how to integrate health monitoring in Azure Deployment Manager. Proceed to the next article to learn how to deploy with Deployment Manager.

> [!div class="nextstepaction"]
-> [Tutorial: integrate health check in Azure Deployment Manager](./deployment-manager-tutorial-health-check.md)
+> [Tutorial: Use health check in Azure Deployment Manager](./deployment-manager-tutorial-health-check.md)
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deployment-manager-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deployment-manager-overview.md
@@ -5,6 +5,7 @@
Last updated 11/21/2019 + # Enable safe deployment practices with Azure Deployment Manager (Public preview) To deploy your service across many regions and make sure it's running as expected in each region, you can use Azure Deployment Manager to coordinate a staged rollout of the service. Just as you would for any Azure deployment, you define the resources for your service in [Resource Manager templates](template-syntax.md). After creating the templates, you use Deployment Manager to describe the topology for your service and how it should be rolled out.
@@ -24,10 +25,10 @@ You deploy the topology template before deploying the rollout template.
Additional resources: -- The [Azure Deployment Manager REST API reference](/rest/api/deploymentmanager/).-- [Tutorial: Use Azure Deployment Manager with Resource Manager templates](./deployment-manager-tutorial.md).-- [Tutorial: Use health check in Azure Deployment Manager](./deployment-manager-tutorial-health-check.md).-- [An Azure Deployment Manager sample](https://github.com/Azure-Samples/adm-quickstart).
+* [Azure Deployment Manager REST API reference](/rest/api/deploymentmanager/).
+* [Tutorial: Use Azure Deployment Manager with Resource Manager templates](./deployment-manager-tutorial.md).
+* [Tutorial: Use health check in Azure Deployment Manager](./deployment-manager-tutorial-health-check.md).
+* [Azure Deployment Manager sample](https://github.com/Azure-Samples/adm-quickstart).
## Identity and access
@@ -43,10 +44,10 @@ The topology template describes the Azure resources that make up your service an
The topology template includes the following resources:
-* Artifact source - where your Resource Manager templates and parameters are stored
-* Service topology - points to artifact source
- * Services - specifies location and Azure subscription ID
- * Service units - specifies resource group, deployment mode, and path to template and parameter file
+* Artifact source - where your Resource Manager templates and parameters are stored.
+* Service topology - points to artifact source.
+ * Services - specifies location and Azure subscription ID.
+ * Service units - specifies resource group, deployment mode, and path to template and parameter files.
To understand what happens at each level, it's helpful to see which values you provide.
@@ -81,7 +82,7 @@ For more information, see [artifactSources template reference](/azure/templates/
### Service topology
-The following example shows the general format of the service topology resource. You provide the resource ID of the artifact source that holds the templates and parameter files. The service topology includes all service resources. To make sure the artifact source is available, the service topology depends on it.
+The following example shows the general format of the service topology resource. You provide the resource ID of the artifact source that holds the templates and parameter files. The service topology includes all service resources. Make sure the artifact source is available because the service topology depends on it.
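A compact sketch of that format, assuming the `2018-09-01-preview` schema (the artifact source resource ID is a placeholder; services and service units are defined as nested resources, omitted here):

```json
{
  "type": "Microsoft.DeploymentManager/serviceTopologies",
  "apiVersion": "2018-09-01-preview",
  "name": "contosoServiceTopology",
  "location": "<location>",
  "dependsOn": [
    "<artifact-source-resource-ID>"
  ],
  "properties": {
    "artifactSourceId": "<artifact-source-resource-ID>"
  }
}
```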
```json {
@@ -169,11 +170,11 @@ For more information, see [serviceUnits template reference](/azure/templates/Mic
The rollout template describes the steps to take when deploying your service. You specify the service topology to use and define the order for deploying service units. It includes an artifact source for storing binaries for the deployment. In your rollout template, you define the following hierarchy:
-* Artifact source
-* Step
-* Rollout
- * Step groups
- * Deployment operations
+* Artifact source.
+* Step.
+* Rollout.
+ * Step groups.
+ * Deployment operations.
The following image shows the hierarchy of the rollout template:
@@ -187,9 +188,9 @@ In the rollout template, you create an artifact source for the binaries you need
### Steps
-You can define a step to perform either before or after your deployment operation. Currently, only the `wait` step and the 'healthCheck' step are available.
+You can define a step to perform either before or after your deployment operation. Currently, only the `wait` step and the `healthCheck` step are available.
-The wait step pauses the deployment before continuing. It allows you to verify that your service is running as expected before deploying the next service unit. The following example shows the general format of a wait step.
+The `wait` step pauses the deployment before continuing. It allows you to verify that your service is running as expected before deploying the next service unit. The following example shows the general format of a `wait` step.
```json {
@@ -208,13 +209,13 @@ The wait step pauses the deployment before continuing. It allows you to verify t
The duration property uses [ISO 8601 standard](https://en.wikipedia.org/wiki/ISO_8601#Durations). The preceding example specifies a one-minute wait.
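For reference, a complete `wait` step is small; a sketch assuming the `2018-09-01-preview` schema:

```json
{
  "type": "Microsoft.DeploymentManager/steps",
  "apiVersion": "2018-09-01-preview",
  "name": "waitOneMinute",
  "location": "<location>",
  "properties": {
    "stepType": "wait",
    "attributes": {
      "duration": "PT1M"
    }
  }
}
```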
-For more information about the health check step, see [Introduce health integration rollout to Azure Deployment Manager](./deployment-manager-health-check.md) and [Tutorial: Use health check in Azure Deployment Manager](./deployment-manager-tutorial-health-check.md).
+For more information about health checks, see [Introduce health integration rollout to Azure Deployment Manager](./deployment-manager-health-check.md) and [Tutorial: Use health check in Azure Deployment Manager](./deployment-manager-tutorial-health-check.md).
For more information, see [steps template reference](/azure/templates/Microsoft.DeploymentManager/steps). ### Rollouts
-To make sure the artifact source is available, the rollout depends on it. The rollout defines steps groups for each service unit that is deployed. You can define actions to take before or after deployment. For example, you can specify that the deployment wait after the service unit has been deployed. You can define the order of the step groups.
+Make sure the artifact source is available because the rollout depends on it. The rollout defines step groups for each service unit that is deployed. You can define actions to take before or after deployment. For example, you can specify that the deployment waits after the service unit has been deployed. You can define the order of the step groups.
The identity object specifies the [user-assigned managed identity](#identity-and-access) that performs the deployment actions.
@@ -264,7 +265,7 @@ You create two parameter files. One parameter file is used when deploying the se
With versioned deployments, the path to your artifacts changes with each new version. The first time you run a deployment the path might be `https://<base-uri-blob-container>/binaries/1.0.0.0`. The second time it might be `https://<base-uri-blob-container>/binaries/1.0.0.1`. Deployment Manager simplifies getting the correct root path for the current deployment by using the `$containerRoot` variable. This value changes with each version and isn't known before deployment.
-Use the `$containerRoot` variable in the parameter file for template to deploy the Azure resources. At deployment time, this variable is replaced with the actual values from the rollout.
+Use the `$containerRoot` variable in the parameter file for the template that deploys the Azure resources. At deployment time, this variable is replaced with the actual values from the rollout.
For example, during rollout you create an artifact source for the binary artifacts.
@@ -290,7 +291,7 @@ For example, during rollout you create an artifact source for the binary artifac
Notice the `artifactRoot` and `sasUri` properties. The artifact root might be set to a value like `binaries/1.0.0.0`. The SAS URI is the URI to your storage container with a SAS token for access. Deployment Manager automatically constructs the value of the `$containerRoot` variable. It combines those values in the format `<container>/<artifactRoot>`.
-Your template and parameter file need to know the correct path for getting the versioned binaries. For example, to deploy files for a web app, create the following parameter file with the $containerRoot variable. You must use two backslashes (`\\`) for the path because the first is an escape character.
+Your template and parameter file need to know the correct path for getting the versioned binaries. For example, to deploy files for a web app, create the following parameter file with the `$containerRoot` variable. You must use two backslashes (`\\`) for the path because the first is an escape character.
```json {
@@ -324,7 +325,7 @@ Then, use that parameter in your template:
} ```
-You manage versioned deployments by creating new folders and passing in that root during rollout. The path flows through to the template that deploys the resources.
+You manage versioned deployments by creating new folders and passing in that root path during rollout. The path flows through to the template that deploys the resources.
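For illustration, a parameter file that passes the versioned package path through `$containerRoot` might look like the following sketch (`deployPackageUri` is a hypothetical parameter name; note the doubled backslash):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "deployPackageUri": {
      "value": "$containerRoot\\WebApp.zip"
    }
  }
}
```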
## Next steps
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/error-not-found https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/error-not-found.md
@@ -99,7 +99,7 @@ In the reference function, use `Full` to get all of the properties including the
The pattern is:
-`"[reference(resourceId(<resource-provider-namespace>, <resource-name>, <API-version>, 'Full').Identity.propertyName]"`
+`"[reference(resourceId(<resource-provider-namespace>, <resource-name>), <API-version>, 'Full').Identity.propertyName]"`
> [!IMPORTANT] > Don't use the pattern:
@@ -126,4 +126,4 @@ When deploying a template, look for expressions that use the [reference](templat
```json "[reference(resourceId('exampleResourceGroup', 'Microsoft.Storage/storageAccounts', 'myStorage'), '2017-06-01')]"
-```
+```
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/auditing-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/auditing-overview.md
@@ -8,7 +8,7 @@
Previously updated : 11/08/2020 Last updated : 02/03/2021 # Auditing for Azure SQL Database and Azure Synapse Analytics
@@ -268,6 +268,11 @@ Extended policy with WHERE clause support for additional filtering:
- [Get Database *Extended* Auditing Policy](/rest/api/sql/database%20extended%20auditing%20settings/get) - [Get Server *Extended* Auditing Policy](/rest/api/sql/server%20auditing%20settings/get)
+### Using Azure CLI
+
+- [Manage a server's auditing policy](/cli/azure/sql/server/audit-policy?view=azure-cli-latest)
+- [Manage a database's auditing policy](/cli/azure/sql/db/audit-policy?view=azure-cli-latest)
+ ### Using Azure Resource Manager templates You can manage Azure SQL Database auditing using [Azure Resource Manager](../../azure-resource-manager/management/overview.md) templates, as shown in these examples:
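For orientation, a server-level auditing resource in such a template has roughly the following shape (a sketch assuming the `Microsoft.Sql/servers/auditingSettings` resource type; the server name, storage values, and retention period are placeholders):

```json
{
  "type": "Microsoft.Sql/servers/auditingSettings",
  "apiVersion": "2017-03-01-preview",
  "name": "<server-name>/default",
  "properties": {
    "state": "Enabled",
    "storageEndpoint": "https://<storage-account>.blob.core.windows.net",
    "storageAccountAccessKey": "<storage-key>",
    "retentionDays": 90
  }
}
```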
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/azure-defender-for-sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/azure-defender-for-sql.md
@@ -10,16 +10,14 @@
- Previously updated : 12/01/2020 Last updated : 02/02/2021 # Azure Defender for SQL [!INCLUDE[appliesto-sqldb-sqlmi-asa](../includes/appliesto-sqldb-sqlmi-asa.md)] - Azure Defender for SQL is a unified package for advanced SQL security capabilities. Azure Defender is available for Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics. It includes functionality for discovering and classifying sensitive data, surfacing and mitigating potential database vulnerabilities, and detecting anomalous activities that could indicate a threat to your database. It provides a single go-to location for enabling and managing these capabilities.
-## Overview
+## What are the benefits of Azure Defender for SQL?
Azure Defender provides a set of advanced SQL security capabilities, including SQL Vulnerability Assessment and Advanced Threat Protection. - [Vulnerability Assessment](sql-vulnerability-assessment.md) is an easy-to-configure service that can discover, track, and help you remediate potential database vulnerabilities. It provides visibility into your security state, and it includes actionable steps to resolve security issues and enhance your database fortifications.
@@ -29,10 +27,6 @@ Enable Azure Defender for SQL once to enable all these included features. With o
For more information about Azure Defender for SQL pricing, see the [Azure Security Center pricing page](https://azure.microsoft.com/pricing/details/security-center/).
-## Getting started with Azure Defender
-
-The following steps get you started with Azure Defender.
- ## Enable Azure Defender Azure Defender can be accessed through the [Azure portal](https://portal.azure.com). Enable Azure Defender by navigating to **Security Center** under the **Security** heading for your server or managed instance.
@@ -42,27 +36,28 @@ Azure Defender can be accessed through the [Azure portal](https://portal.azure.c
> > The cost of Azure Defender is aligned with Azure Security Center standard tier pricing per node, where a node is the entire server or managed instance. You are thus paying only once for protecting all databases on the server or managed instance with Azure Defender. You can try Azure Defender out initially with a free trial.
-## Start tracking vulnerabilities and investigating threat alerts
+## Track vulnerabilities and investigate threat alerts
Click the **Vulnerability Assessment** card to view and manage vulnerability scans and reports, and to track your security posture. If security alerts have been received, click the **Advanced Threat Protection** card to view details of the alerts and to see a consolidated report on all alerts in your Azure subscription via the Azure Security Center security alerts page.

## Manage Azure Defender settings
-To view and manage Azure Defender settings, navigate to **Security Center** under the **Security** heading for your server or managed instance. On this page, you can enable or disable Azure Defender, and modify vulnerability assessment and Advanced Threat Protection settings for your entire server or managed instance.
+To view and manage Azure Defender settings:
+
+1. From the **Security** area of your server or managed instance, select **Security Center**.
+ On this page, you'll see the status of Azure Defender for SQL:
-## Manage Azure Defender settings for a database
+ :::image type="content" source="media/azure-defender-for-sql/status-of-defender-for-sql.png" alt-text="Checking the status of Azure Defender for SQL inside Azure SQL databases":::
-To override Azure Defender settings for a particular database, check the **Enable Azure Defender for SQL at the database level** checkbox in your database **Security Center** settings. Use this option only if you have a particular requirement to receive separate Advanced Threat Protection alerts or vulnerability assessment results for the individual database, in place of or in addition to the alerts and results received for all databases on the server or managed instance.
+1. If Azure Defender for SQL is enabled, you'll see a **Configure** link as shown in the previous graphic. To edit the settings for Azure Defender for SQL, select **Configure**.
-Once the checkbox is selected, you can then configure the relevant settings for this database.
+ :::image type="content" source="media/azure-defender-for-sql/security-server-settings.png" alt-text="security server settings":::
+1. Make the necessary changes and select **Save**.
-Azure Defender for SQL settings for your server or managed instance can also be reached from the Azure Defender database pane. Click **Settings** in the main Security Center pane, and then click **View Azure Defender for SQL server settings**.
## Next steps
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/service-tier-hyperscale-frequently-asked-questions-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tier-hyperscale-frequently-asked-questions-faq.md
@@ -10,7 +10,7 @@
Previously updated : 03/03/2020 Last updated : 02/03/2021 # Azure SQL Database Hyperscale FAQ [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
@@ -160,7 +160,7 @@ Your database size automatically grows as you insert/ingest more data.
### What is the smallest database size that Hyperscale supports or starts with
-40 GB. A Hyperscale database is created with a starting size of 10 GB. Then, it starts growing by 10 GB every 10 minutes, until it reaches the size of 40 GB. Each of these 10 GB chucks is allocated in a different page server in order to provide more IOPS and higher I/O parallelism. Because of this optimization, even if you choose initial database size smaller than 40 GB, the database will grow to at least 40 GB automatically.
+40 GB. A Hyperscale database is created with a starting size of 10 GB. Then, it starts growing by 10 GB every 10 minutes, until it reaches the size of 40 GB. Each of these 10 GB chunks is allocated in a different page server in order to provide more IOPS and higher I/O parallelism. Because of this optimization, even if you choose an initial database size smaller than 40 GB, the database will grow to at least 40 GB automatically.
### In what increments does my database size grow
@@ -227,7 +227,7 @@ Hyperscale is capable of consuming 100 MB/s of new/changed data, but the time ne
You can have a client application read data from Azure Storage and load data into a Hyperscale database (just like you can with any other database in Azure SQL Database). PolyBase is currently not supported in Azure SQL Database. As an alternative to provide fast load, you can use [Azure Data Factory](../../data-factory/index.yml), or use a Spark job in [Azure Databricks](/azure/azure-databricks/) with the [Spark connector for SQL](spark-connector.md). The Spark connector to SQL supports bulk insert.
-It is also possible to bulk read data from Azure Blob store using BULK INSERT or OPENROWSET: [Examples of Bulk Access to Data in Azure Blob Storage](/sql/relational-databases/import-export/examples-of-bulk-access-to-data-in-azure-blob-storage?view=sql-server-2017#accessing-data-in-a-csv-file-referencing-an-azure-blob-storage-location).
+It is also possible to bulk read data from Azure Blob store using BULK INSERT or OPENROWSET: [Examples of Bulk Access to Data in Azure Blob Storage](/sql/relational-databases/import-export/examples-of-bulk-access-to-data-in-azure-blob-storage#accessing-data-in-a-csv-file-referencing-an-azure-blob-storage-location).
Simple recovery or bulk logging model is not supported in Hyperscale. Full recovery model is required to provide high availability and point-in-time recovery. However, Hyperscale log architecture provides better data ingest rate compared to other Azure SQL Database service tiers.
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/managed-instance/failover-group-add-instance-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/failover-group-add-instance-tutorial.md
@@ -26,7 +26,7 @@ Add managed instances of Azure SQL Managed Instance to a failover group. In this
> [!NOTE] > - When going through this tutorial, ensure you are configuring your resources with the [prerequisites for setting up failover groups for SQL Managed Instance](../database/auto-failover-group-overview.md#enabling-geo-replication-between-managed-instances-and-their-vnets). > - Creating a managed instance can take a significant amount of time. As a result, this tutorial could take several hours to complete. For more information on provisioning times, see [SQL Managed Instance management operations](sql-managed-instance-paas-overview.md#management-operations).
- > - Managed instances participating in a failover group require either [Azure ExpressRoute](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md) or two connected VPN gateways. Global VNet Peering is not supported. This tutorial provides steps for creating and connecting the VPN gateways. Skip these steps if you already have ExpressRoute configured.
+ > - Managed instances participating in a failover group require [Azure ExpressRoute](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md), global VNet peering, or two connected VPN gateways. This tutorial provides steps for creating and connecting the VPN gateways. Skip these steps if you already have ExpressRoute configured.
## Prerequisites
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/managed-instance/point-in-time-restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/point-in-time-restore.md
@@ -156,7 +156,7 @@ $targetDatabaseName = "<target database name>"
$deletedDatabase = Get-AzSqlDeletedInstanceDatabaseBackup -ResourceGroupName $resourceGroupName ` -InstanceName $managedInstanceName -DatabaseName $deletedDatabaseName
-Restore-AzSqlinstanceDatabase -Name $deletedDatabase.Name `
+Restore-AzSqlinstanceDatabase -FromPointInTimeBackup -Name $deletedDatabase.Name `
-InstanceName $deletedDatabase.ManagedInstanceName ` -ResourceGroupName $deletedDatabase.ResourceGroupName ` -DeletionDate $deletedDatabase.DeletionDate `
@@ -170,7 +170,7 @@ To restore the database to another SQL Managed Instance, also specify the names
$targetResourceGroupName = "<Resource group of target SQL Managed Instance>" $targetInstanceName = "<Target SQL Managed Instance name>"
-Restore-AzSqlinstanceDatabase -Name $deletedDatabase.Name `
+Restore-AzSqlinstanceDatabase -FromPointInTimeBackup -Name $deletedDatabase.Name `
-InstanceName $deletedDatabase.ManagedInstanceName ` -ResourceGroupName $deletedDatabase.ResourceGroupName ` -DeletionDate $deletedDatabase.DeletionDate `
@@ -247,4 +247,4 @@ Use one of the following methods to connect to your database in SQL Managed Inst
## Next steps
-Learn about [automated backups](../database/automated-backups-overview.md).
+Learn about [automated backups](../database/automated-backups-overview.md).
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/managed-instance/transact-sql-tsql-differences-sql-server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/transact-sql-tsql-differences-sql-server.md
@@ -483,6 +483,7 @@ Cross-instance service broker isn't supported:
- `remote access` - `remote data archive` - `remote proc trans`
+ - `scan for startup procs`
- `sp_execute_external_scripts` isn't supported. See [sp_execute_external_scripts](/sql/relational-databases/system-stored-procedures/sp-execute-external-script-transact-sql#examples). - `xp_cmdshell` isn't supported. See [xp_cmdshell](/sql/relational-databases/system-stored-procedures/xp-cmdshell-transact-sql). - `Extended stored procedures` aren't supported, which includes `sp_addextendedproc` and `sp_dropextendedproc`. See [Extended stored procedures](/sql/relational-databases/system-stored-procedures/general-extended-stored-procedures-transact-sql).
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/availability-group-manually-configure-multiple-regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/availability-group-manually-configure-multiple-regions.md
@@ -84,6 +84,7 @@ To create a replica in a remote data center, do the following steps:
- Use a TCP port probe specific to the IP address. - Have a load balancing rule specific to the SQL Server in the same region. - Be a Standard Load Balancer if the virtual machines in the backend pool are not part of either a single availability set or virtual machine scale set. For additional information review [Azure Load Balancer Standard overview](../../../load-balancer/load-balancer-overview.md).
+ - Be a Standard Load Balancer if the two virtual networks in two different regions are peered over global VNet peering. For more information, see [Azure Virtual Network frequently asked questions (FAQ)](../../../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers).
1. [Add Failover Clustering feature to the new SQL Server](availability-group-manually-configure-prerequisites-tutorial.md#add-failover-clustering-features-to-both-sql-server-vms).
@@ -201,4 +202,4 @@ For more information, see the following topics:
* [Always On Availability Groups](/sql/database-engine/availability-groups/windows/always-on-availability-groups-sql-server) * [Azure Virtual Machines](../../../virtual-machines/windows/index.yml) * [Azure Load Balancers](availability-group-manually-configure-tutorial.md#configure-internal-load-balancer)
-* [Azure Availability Sets](../../../virtual-machines/manage-availability.md)
+* [Azure Availability Sets](../../../virtual-machines/manage-availability.md)
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/failover-cluster-instance-azure-shared-disks-manually-configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/failover-cluster-instance-azure-shared-disks-manually-configure.md
@@ -29,7 +29,7 @@ To learn more, see an overview of [FCI with SQL Server on Azure VMs](failover-cl
Before you complete the instructions in this article, you should already have: - An Azure subscription. Get started for [free](https://azure.microsoft.com/free/). -- [Two or more Windows Azure virtual machines](failover-cluster-instance-prepare-vm.md). [Availability sets](../../../virtual-machines/windows/tutorial-availability-sets.md) and [proximity placement groups](../../../virtual-machines/co-location.md#proximity-placement-groups) (PPGs) supported for Premium SSD and [availability zones](../../../virtual-machines/windows/create-portal-availability-zone.md#confirm-zone-for-managed-disk-and-ip-address) are supported for Ultra Disks. If you use a PPG, all nodes must exist in the same group.
+- [Two or more Windows Azure virtual machines](failover-cluster-instance-prepare-vm.md). [Availability sets](../../../virtual-machines/windows/tutorial-availability-sets.md) and [proximity placement groups](../../../virtual-machines/co-location.md#proximity-placement-groups) (PPGs) are supported for Premium SSD, and [availability zones](../../../virtual-machines/windows/create-portal-availability-zone.md#confirm-zone-for-managed-disk-and-ip-address) are supported for Ultra Disks. All nodes must exist in the same [proximity placement group](../../../virtual-machines/co-location.md#proximity-placement-groups).
- An account that has permissions to create objects on both Azure virtual machines and in Active Directory. - The latest version of [PowerShell](/powershell/azure/install-az-ps).
@@ -218,4 +218,4 @@ To learn more, see an overview of [FCI with SQL Server on Azure VMs](failover-cl
For more information, see: - [Windows cluster technologies](/windows-server/failover-clustering/failover-clustering-overview) -- [SQL Server failover cluster instances](/sql/sql-server/failover-clusters/windows/always-on-failover-cluster-instances-sql-server)
+- [SQL Server failover cluster instances](/sql/sql-server/failover-clusters/windows/always-on-failover-cluster-instances-sql-server)
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/concepts-identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-identity.md
@@ -2,7 +2,7 @@
Title: Concepts - Identity and access description: Learn about the identity and access concepts of Azure VMware Solution Previously updated : 11/11/2020 Last updated : 02/02/2021 # Azure VMware Solution identity concepts
@@ -48,7 +48,11 @@ Use the "administrator" account to access NSX-T Manager. It has full privileges
## Next steps
-The next step is to learn about [private cloud upgrade concepts][concepts-upgrades].
+Now that you've covered Azure VMware Solution access and identity concepts, you may want to learn about:
+
+- [Private cloud upgrade concepts](concepts-upgrades.md).
+- [vSphere role-based access control for Azure VMware Solution](concepts-role-based-access-control.md).
+- [How to enable the Azure VMware Solution resource](enable-azure-vmware-solution.md).
<!-- LINKS - external -->
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/concepts-monitor-repair-private-cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-monitor-repair-private-cloud.md
@@ -3,7 +3,7 @@ Title: Concepts - Monitor and repair Azure VMware Solution private clouds
description: Learn how Azure VMware Solution monitors and repairs VMware ESXi servers on an Azure VMware Solution private cloud. Previously updated : 11/20/2020 Last updated : 02/03/2021 # Monitor and repair Azure VMware Solution private clouds
@@ -38,10 +38,7 @@ The host remediation process starts by adding a new healthy node in the cluster.
## Next steps
-Here are some topics you may want to learn more about:
+Now that you've covered how Azure VMware Solution monitors and repairs private clouds, you may want to learn about:
-- [Azure VMware Solution private cloud upgrades](concepts-upgrades.md)-- [Lifecycle management of Azure VMware Solution VMs](lifecycle-management-of-azure-vmware-solution-vms.md)-- [Protect your Azure VMware Solution VMs with Azure Security Center integration](azure-security-integration.md)-- [Back up Azure VMware Solution VMs with Azure Backup Server](backup-azure-vmware-solution-virtual-machines.md)-- [Complete disaster recovery of virtual machines using Azure VMware Solution](disaster-recovery-for-virtual-machines.md)
+- [Azure VMware Solution private cloud upgrades](concepts-upgrades.md).
+- [How to enable the Azure VMware Solution resource](enable-azure-vmware-solution.md).
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/concepts-networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-networking.md
@@ -2,7 +2,7 @@
Title: Concepts - Network interconnectivity description: Learn about key aspects and use cases of networking and interconnectivity in Azure VMware Solution. Previously updated : 09/21/2020 Last updated : 02/02/2021 # Azure VMware Solution networking and interconnectivity concepts
@@ -54,10 +54,11 @@ For full interconnectivity to your private cloud, enable ExpressRoute Global Rea
## Next steps
-Now that you've covered these network and interconnectivity concepts, you may want to learn about:
+Now that you've covered Azure VMware Solution network and interconnectivity concepts, you may want to learn about:
- [Azure VMware Solution storage concepts](concepts-storage.md).-- [Azure VMware Solution identity concepts](concepts-identity.md)
+- [Azure VMware Solution identity concepts](concepts-identity.md).
+- [How to enable the Azure VMware Solution resource](enable-azure-vmware-solution.md).
<!-- LINKS - external --> [enable Global Reach]: ../expressroute/expressroute-howto-set-global-reach.md
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/concepts-private-clouds-clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-private-clouds-clusters.md
@@ -2,7 +2,7 @@
Title: Concepts - Private clouds and clusters description: Learn about the key capabilities of Azure VMware Solution software-defined data centers and vSphere clusters. Previously updated : 10/27/2020 Last updated : 02/02/2021 # Azure VMware Solution private cloud and cluster concepts
@@ -61,10 +61,11 @@ Private cloud vCenter and NSX-T configurations are on an hourly backup schedule.
## Next steps
-Now that you've covered these Azure VMware Solution private cloud concepts, you may want to learn about:
+Now that you've covered Azure VMware Solution private cloud concepts, you may want to learn about:
- [Azure VMware Solution networking and interconnectivity concepts](concepts-networking.md). - [Azure VMware Solution storage concepts](concepts-storage.md).
+- [How to enable the Azure VMware Solution resource](enable-azure-vmware-solution.md).
<!-- LINKS - internal -->
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/concepts-role-based-access-control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-role-based-access-control.md
@@ -2,7 +2,7 @@
Title: Concepts - vSphere role-based access control (vSphere RBAC) description: Learn about the key capabilities of vSphere role-based access control for Azure VMware Solution Previously updated : 10/23/2020 Last updated : 02/02/2021 # vSphere role-based access control (vSphere RBAC) for Azure VMware Solution
@@ -18,9 +18,6 @@ In an Azure VMware Solution deployment, the administrator doesn't have access to
The private cloud user doesn't have access to and can't configure specific management components supported and managed by Microsoft. For example, clusters, hosts, datastores, and distributed virtual switches. --- ## Azure VMware Solution CloudAdmin role on vCenter You can view the privileges granted to the Azure VMware Solution CloudAdmin role on your Azure VMware Solution private cloud vCenter.
@@ -58,7 +55,11 @@ The CloudAdmin role in Azure VMware Solution has the following privileges on vCe
## Next steps
-Refer to the [VMware product documentation](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-ED56F3C4-77D0-49E3-88B6-B99B8B437B62.html) for a detailed explanation of each privilege.
+Now that you've covered the basics of vSphere role-based access control for Azure VMware Solution, you may want to learn about:
+
+- The details of each privilege in the [VMware product documentation](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-ED56F3C4-77D0-49E3-88B6-B99B8B437B62.html).
+- [How Azure VMware Solution monitors and repairs private clouds](concepts-monitor-repair-private-cloud.md).
+- [How to enable the Azure VMware Solution resource](enable-azure-vmware-solution.md).
<!-- LINKS - internal -->
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/concepts-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-storage.md
@@ -2,7 +2,7 @@
Title: Concepts - Storage description: Learn about the key storage capabilities in Azure VMware Solution private clouds. Previously updated : 11/03/2020 Last updated : 02/02/2021 # Azure VMware Solution storage concepts
@@ -27,15 +27,19 @@ vSAN datastores use data-at-rest encryption by default. The encryption solution
## Scaling
-Native cluster storage capacity is scaled by adding hosts to a cluster. For clusters that use HE hosts, the raw cluster-wide capacity is increased by 15.4 TB with each additional host. Clusters that are built with GP hosts have their raw capacity increased by 7.7 TB with each additional host. In both types of clusters, hosts take about 10 minutes to be added to a cluster. For instructions on scaling clusters, see the [scale private cloud tutorial][tutorial-scale-private-cloud].
+Native cluster storage capacity is scaled by adding hosts to a cluster. For clusters that use HE hosts, the raw cluster-wide capacity is increased by 15.4 TB with each added host. Clusters that are built with GP hosts have their raw capacity increased by 7.7 TB with each added host. In both types of clusters, hosts take about 10 minutes to be added to a cluster. For instructions on scaling clusters, see the [scale private cloud tutorial][tutorial-scale-private-cloud].
## Azure storage integration
-You can use Azure storage services on workloads running in your private cloud. The Azure storage services include Storage Accounts, Table Storage, and Blob Storage. The connection of workloads to Azure storage services doesn't traverse the internet. This connectivity provides additional security and enables you to use SLA-based Azure storage services in your private cloud workloads.
+You can use Azure storage services on workloads running in your private cloud. The Azure storage services include Storage Accounts, Table Storage, and Blob Storage. The connection of workloads to Azure storage services doesn't traverse the internet. This connectivity provides more security and enables you to use SLA-based Azure storage services in your private cloud workloads.
## Next steps
-The next step is to learn about [private cloud identity concepts][concepts-identity].
+Now that you've covered Azure VMware Solution storage concepts, you may want to learn about:
+
+- [Private cloud identity concepts](concepts-identity.md).
+- [vSphere role-based access control for Azure VMware Solution](concepts-role-based-access-control.md).
+- [How to enable the Azure VMware Solution resource](enable-azure-vmware-solution.md).
<!-- LINKS - external-->
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/concepts-upgrades https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-upgrades.md
@@ -2,7 +2,7 @@
Title: Concepts - Private cloud updates and upgrades description: Learn about the key upgrade processes and features in Azure VMware Solution. Previously updated : 09/22/2020 Last updated : 02/02/2021 # Azure VMware Solution private cloud updates and upgrades
@@ -52,7 +52,10 @@ For more information on VMware software versions, see the [private clouds and cl
## Next steps
-The next step is to [create a private cloud](tutorial-create-private-cloud.md).
+Now that you've covered the key upgrade processes and features in Azure VMware Solution, you may want to learn about:
+
+- [How to create a private cloud](tutorial-create-private-cloud.md).
+- [How to enable the Azure VMware Solution resource](enable-azure-vmware-solution.md).
<!-- LINKS - external -->
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/netapp-files-with-azure-vmware-solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/netapp-files-with-azure-vmware-solution.md
@@ -2,7 +2,7 @@
Title: Azure NetApp Files with Azure VMware Solution description: Use Azure NetApp Files with Azure VMware Solution VMs to migrate and sync data across on-premises servers, Azure VMware Solution VMs, and cloud infrastructures. Previously updated : 01/20/2021 Last updated : 02/01/2021 # Azure NetApp Files with Azure VMware Solution
backup https://docs.microsoft.com/en-us/azure/backup/backup-azure-arm-restore-vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-arm-restore-vms.md
@@ -19,7 +19,6 @@ Azure Backup provides a number of ways to restore a VM.
**Restore disk** | Restores a VM disk, which can then be used to create a new VM.<br/><br/> Azure Backup provides a template to help you customize and create a VM. <br/><br> The restore job generates a template that you can download and use to specify custom VM settings, and create a VM.<br/><br/> The disks are copied to the Resource Group you specify.<br/><br/> Alternatively, you can attach the disk to an existing VM, or create a new VM using PowerShell.<br/><br/> This option is useful if you want to customize the VM, add configuration settings that weren't there at the time of backup, or add settings that must be configured using the template or PowerShell. **Replace existing** | You can restore a disk, and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it's been deleted, this option can't be used.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk, and stores it in the staging location you specify. Existing disks connected to the VM are replaced with the selected restore point.<br/><br/> The snapshot is copied to the vault, and retained in accordance with the retention policy. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>Replace existing is supported for unencrypted managed VMs, including VMs [created using custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's unsupported for classic VMs.<br/><br/> If the restore point has more or less disks than the current VM, then the number of disks in the restore point will only reflect the VM configuration.<br><br> Replace existing is also supported for VMs with linked resources, like [user-assigned managed-identity](../active-directory/managed-identities-azure-resources/overview.md) or [Key Vault](../key-vault/general/overview.md). **Cross Region (secondary region)** | Cross Region restore can be used to restore Azure VMs in the secondary region, which is an [Azure paired region](../best-practices-availability-paired-regions.md#what-are-paired-regions).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> During the backup, snapshots aren't replicated to the secondary region. Only the data stored in the vault is replicated. So secondary region restores are only [vault tier](about-azure-vm-restore.md#concepts) restores. The restore time for the secondary region will be almost the same as the vault tier restore time for the primary region. <br><br> This feature is available for the options below:<br> <li> [Create a VM](#create-a-vm) <br> <li> [Restore Disks](#restore-disks) <br><br> We don't currently support the [Replace existing disks](#replace-existing-disks) option.<br><br> Permissions<br> The restore operation on secondary region can be performed by Backup Admins and App admins.
-**Cross-zonal restore** | Cross-zonal restore can be used to restore [Azure zone pinned VMs](https://docs.microsoft.com/azure/virtual-machines/windows/create-portal-availability-zone) in any [availability zones](https://docs.microsoft.com/azure/availability-zones/az-overview) of the same region. <br> <br> You can restore all the Azure zone pinned VMs for the selected recovery point that were backed up after the release of this feature, to the zone of your choice. By default it will restore in the same zone as it was backed up. <br> <br> This can be used during disaster recovery scenarios, if the pinned zone of the VM becomes unavailable.
> [!NOTE] > You can also recover specific files and folders on an Azure VM. [Learn more](backup-azure-restore-files-from-vm.md).
@@ -174,11 +173,9 @@ Currently, secondary region [RPO](azure-backup-glossary.md#rpo-recovery-point-ob
>- The Cross Region Restore feature restores CMK (customer-managed keys) enabled Azure VMs, which aren't backed-up in a CMK enabled Recovery Services vault, as non-CMK enabled VMs in the secondary region. >- The Azure roles needed to restore in the secondary region are the same as those in the primary region.
-## Cross-zonal restore
+[Azure zone pinned VMs](https://docs.microsoft.com/azure/virtual-machines/windows/create-portal-availability-zone) can be restored in any [availability zones](https://docs.microsoft.com/azure/availability-zones/az-overview) of the same region.
-Cross-zonal restore can be used to restore [Azure zone pinned VMs](https://docs.microsoft.com/azure/virtual-machines/windows/create-portal-availability-zone) in any [availability zones](https://docs.microsoft.com/azure/availability-zones/az-overview) of the same region.
-
-In the restore process, you'll see the option **Availability Zone.** You'll see your default zone first. To choose a different zone, choose the number of the zone of your choice. Choose a different zone if the default availability zone isn't available due to an outage, or for any other reason you choose to restore in a different zone.
+In the restore process, you'll see the option **Availability Zone**. You'll see your default zone first. To choose a different zone, select the number of the zone you want. If the pinned zone is unavailable, you won't be able to restore the data to another zone because the backed-up data isn't zonally replicated.
![Choose availability zone](./media/backup-azure-arm-restore-vms/cross-zonal-restore.png)
backup https://docs.microsoft.com/en-us/azure/backup/backup-create-rs-vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-create-rs-vault.md
@@ -66,6 +66,7 @@ Since this process is at the storage level, there are [pricing implications](htt
>- After opting in, it might take up to 48 hours for the backup items to be available in secondary regions.
>- Currently, CRR for Azure VMs is supported only for Azure Resource Manager Azure VMs. Classic Azure VMs won't be supported. When additional management types support CRR, they'll be **automatically** enrolled.
>- Cross Region Restore **currently can't be reverted** to GRS or LRS once the protection is initiated for the first time.
+>- Currently, secondary region [RPO](azure-backup-glossary.md#rpo-recovery-point-objective) is up to 12 hours from the primary region, even though [read-access geo-redundant storage (RA-GRS)](https://docs.microsoft.com/azure/storage/common/storage-redundancy#redundancy-in-a-secondary-region) replication is 15 minutes.
### Configure Cross Region Restore
backup https://docs.microsoft.com/en-us/azure/backup/tutorial-sap-hana-manage-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/tutorial-sap-hana-manage-cli.md
@@ -73,6 +73,224 @@ Name Resource Group
cb110094-9b15-4c55-ad45-6899200eb8dd SAPHANA ```
+## Create incremental backup policy
+
+To create an incremental backup policy, execute the [az backup policy create](https://docs.microsoft.com/cli/azure/backup/policy#az_backup_policy_create) command with the following parameters:
+
+* **--backup-management-type** - AzureWorkload
+* **--workload-type** - SAPHana
+* **--name** - Name of the policy
+* **--policy** - JSON file with appropriate details for schedule and retention
+* **--resource-group** - Resource group of the vault
+* **--vault-name** - Name of the vault
+
+Example:
+
+```azurecli
+az backup policy create --resource-group saphanaResourceGroup --vault-name saphanaVault --name sappolicy --backup-management-type AzureWorkload --policy sappolicy.json --workload-type SAPHana
+```
+
+Sample output (the created policy is returned as JSON):
+
+```json
+ "eTag": null,
+ "id": "/Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/saphanaResourceGroup/providers/Microsoft.RecoveryServices/vaults/saphanaVault/backupPolicies/sappolicy",
+ "location": null,
+ "name": "sappolicy",
+ "properties": {
+ "backupManagementType": "AzureWorkload",
+ "makePolicyConsistent": null,
+ "protectedItemsCount": 0,
+ "settings": {
+ "isCompression": false,
+ "issqlcompression": false,
+ "timeZone": "UTC"
+ },
+ "subProtectionPolicy": [
+ {
+ "policyType": "Full",
+ "retentionPolicy": {
+ "dailySchedule": null,
+ "monthlySchedule": {
+ "retentionDuration": {
+ "count": 60,
+ "durationType": "Months"
+ },
+ "retentionScheduleDaily": null,
+ "retentionScheduleFormatType": "Weekly",
+ "retentionScheduleWeekly": {
+ "daysOfTheWeek": [
+ "Sunday"
+ ],
+ "weeksOfTheMonth": [
+ "First"
+ ]
+ },
+ "retentionTimes": [
+ "2021-01-19T00:30:00+00:00"
+ ]
+ },
+ "retentionPolicyType": "LongTermRetentionPolicy",
+ "weeklySchedule": {
+ "daysOfTheWeek": [
+ "Sunday"
+ ],
+ "retentionDuration": {
+ "count": 104,
+ "durationType": "Weeks"
+ },
+ "retentionTimes": [
+ "2021-01-19T00:30:00+00:00"
+ ]
+ },
+ "yearlySchedule": {
+ "monthsOfYear": [
+ "January"
+ ],
+ "retentionDuration": {
+ "count": 10,
+ "durationType": "Years"
+ },
+ "retentionScheduleDaily": null,
+ "retentionScheduleFormatType": "Weekly",
+ "retentionScheduleWeekly": {
+ "daysOfTheWeek": [
+ "Sunday"
+ ],
+ "weeksOfTheMonth": [
+ "First"
+ ]
+ },
+ "retentionTimes": [
+ "2021-01-19T00:30:00+00:00"
+ ]
+ }
+ },
+ "schedulePolicy": {
+ "schedulePolicyType": "SimpleSchedulePolicy",
+ "scheduleRunDays": [
+ "Sunday"
+ ],
+ "scheduleRunFrequency": "Weekly",
+ "scheduleRunTimes": [
+ "2021-01-19T00:30:00+00:00"
+ ],
+ "scheduleWeeklyFrequency": 0
+ }
+ },
+ {
+ "policyType": "Incremental",
+ "retentionPolicy": {
+ "retentionDuration": {
+ "count": 30,
+ "durationType": "Days"
+ },
+ "retentionPolicyType": "SimpleRetentionPolicy"
+ },
+ "schedulePolicy": {
+ "schedulePolicyType": "SimpleSchedulePolicy",
+ "scheduleRunDays": [
+ "Monday",
+ "Tuesday",
+ "Wednesday",
+ "Thursday",
+ "Friday",
+ "Saturday"
+ ],
+ "scheduleRunFrequency": "Weekly",
+ "scheduleRunTimes": [
+ "2017-03-07T02:00:00+00:00"
+ ],
+ "scheduleWeeklyFrequency": 0
+ }
+ },
+ {
+ "policyType": "Log",
+ "retentionPolicy": {
+ "retentionDuration": {
+ "count": 15,
+ "durationType": "Days"
+ },
+ "retentionPolicyType": "SimpleRetentionPolicy"
+ },
+ "schedulePolicy": {
+ "scheduleFrequencyInMins": 120,
+ "schedulePolicyType": "LogSchedulePolicy"
+ }
+ }
+ ],
+ "workLoadType": "SAPHanaDatabase"
+ },
+ "resourceGroup": "azurefiles",
+ "tags": null,
+ "type": "Microsoft.RecoveryServices/vaults/backupPolicies"
+}
+```
+
+You can modify the following section of the policy to specify the desired backup frequency and retention for incremental backups.
+
+For example:
+
+```json
+{
+ "policyType": "Incremental",
+ "retentionPolicy": {
+ "retentionDuration": {
+ "count": 30,
+ "durationType": "Days"
+ },
+ "retentionPolicyType": "SimpleRetentionPolicy"
+ },
+ "schedulePolicy": {
+ "schedulePolicyType": "SimpleSchedulePolicy",
+ "scheduleRunDays": [
+ "Monday",
+ "Tuesday",
+ "Wednesday",
+ "Thursday",
+ "Friday",
+ "Saturday"
+ ],
+ "scheduleRunFrequency": "Weekly",
+ "scheduleRunTimes": [
+ "2017-03-07T02:00:00+00:00"
+ ],
+ "scheduleWeeklyFrequency": 0
+ }
+}
+```
+
+Example:
+
+If you want to have incremental backups only on Saturday and retain them for 60 days, make the following changes in the policy:
+
+* Update **retentionDuration** count to 60 days
+* Specify only Saturday in **scheduleRunDays**
+
+```json
+ {
+ "policyType": "Incremental",
+ "retentionPolicy": {
+ "retentionDuration": {
+ "count": 60,
+ "durationType": "Days"
+ },
+ "retentionPolicyType": "SimpleRetentionPolicy"
+ },
+ "schedulePolicy": {
+ "schedulePolicyType": "SimpleSchedulePolicy",
+ "scheduleRunDays": [
+ "Saturday"
+ ],
+ "scheduleRunFrequency": "Weekly",
+ "scheduleRunTimes": [
+ "2017-03-07T02:00:00+00:00"
+ ],
+ "scheduleWeeklyFrequency": 0
+ }
+}
+```
+ ## Protect new databases added to an SAP HANA instance [Registering an SAP HANA instance with a Recovery Services vault](tutorial-sap-hana-backup-cli.md#register-and-protect-the-sap-hana-instance) automatically discovers all the databases on this instance.
batch https://docs.microsoft.com/en-us/azure/batch/batch-low-pri-vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-low-pri-vms.md
@@ -3,7 +3,7 @@ Title: Run workloads on cost-effective low-priority VMs
description: Learn how to provision low-priority VMs to reduce the cost of Azure Batch workloads. Previously updated : 09/08/2020 Last updated : 02/02/2021
@@ -14,120 +14,58 @@ Azure Batch offers low-priority virtual machines (VMs) to reduce the cost of Bat
Low-priority VMs take advantage of surplus capacity in Azure. When you specify low-priority VMs in your pools, Azure Batch can use this surplus, when available.
-The tradeoff for using low-priority VMs is that those VMs may not always be available to be allocated, or may be preempted at any time, depending on available capacity. For this reason, low-priority VMs are most suitable for certain types of workloads. Use low-priority VMs for batch and asynchronous processing workloads where the job completion time is flexible and the work is distributed across many VMs.
+The tradeoff for using low-priority VMs is that those VMs may not always be available to be allocated, or may be preempted at any time, depending on available capacity. For this reason, low-priority VMs are most suitable for batch and asynchronous processing workloads where the job completion time is flexible and the work is distributed across many VMs.
Low-priority VMs are offered at a significantly reduced price compared with dedicated VMs. For pricing details, see [Batch Pricing](https://azure.microsoft.com/pricing/details/batch/).

> [!NOTE]
> [Spot VMs](https://azure.microsoft.com/pricing/spot/) are now available for [single instance VMs](../virtual-machines/spot-vms.md) and [VM scale sets](../virtual-machine-scale-sets/use-spot.md). Spot VMs are an evolution of low-priority VMs, but differ in that pricing can vary and an optional maximum price can be set when allocating Spot VMs.
>
-> Azure Batch pools will start supporting Spot VMs within a few months of them being generally available, with new versions of the [Batch APIs and tools](./batch-apis-tools.md). Once Spot VM support is available, low-priority VMs will be deprecated - they will continue to be supported using current APIs and tool versions for at least 12 months, to allow sufficient time for migration to Spot VMs.
+> Azure Batch pools will start supporting Spot VMs within a few months of them being generally available, with new versions of the [Batch APIs and tools](./batch-apis-tools.md). Once Spot VM support is available, low-priority VMs will be deprecated - they will continue to be supported using current APIs and tool versions for at least 12 months, to allow sufficient time for migration to Spot VMs.
>
> Spot VMs will not be supported for [Cloud Service Configuration](/rest/api/batchservice/pool/add#cloudserviceconfiguration) pools. To use Spot VMs, Cloud Service pools will need to be migrated to [Virtual Machine Configuration](/rest/api/batchservice/pool/add#virtualmachineconfiguration) pools.
-## Use cases for low-priority VMs
-
-Given the characteristics of low-priority VMs, what workloads can and cannot use
-them? In general, batch processing workloads are a good fit, as jobs are broken
-into many parallel tasks or there are many jobs that are scaled out and
-distributed across many VMs.
--- To maximize use of surplus capacity in Azure, suitable jobs
- can scale out.
--- Occasionally VMs may not be available or are preempted, which results
- in reduced capacity for jobs and may lead to task interruption and
- reruns. Jobs must therefore be flexible in the time they can take to run.
--- Jobs with longer tasks may be impacted more if interrupted. If long-running
- tasks implement checkpointing to save progress as they execute, then the
- impact of interruption is reduced. Tasks with shorter execution times
- tend to work best with low-priority VMs, because the impact of interruption is far
- less.
--- Long-running MPI jobs that utilize multiple VMs are not well suited to use
- low-priority VMs, because one preempted VM can lead to the whole job
- having to run again.
-
-Some examples of batch processing use cases well suited to use low-priority VMs
-are:
--- **Development and testing**: In particular, if large-scale solutions are
- being developed, significant savings can be realized. All types of testing
- can benefit, but large-scale load testing and regression testing are great
- uses.
--- **Supplementing on-demand capacity**: Low-priority VMs can be used to
- supplement regular dedicated VMs - when available, jobs can scale and
- therefore complete quicker for lower cost; when not available, the baseline
- of dedicated VMs remains available.
--- **Flexible job execution time**: If there is flexibility in the time jobs
- have to complete, then potential drops in capacity can be tolerated;
- however, with the addition of low-priority VMs jobs frequently run
- faster and for a lower cost.
+## Batch support for low-priority VMs
-Batch pools can be configured to use low-priority VMs in a few ways, depending
-on the flexibility in job execution time:
+Azure Batch provides several capabilities that make it easy to consume and benefit from low-priority VMs:
-- Low-priority VMs can solely be used in a pool. In this case, Batch recovers
- any preempted capacity when available. This configuration is the cheapest way to execute
- jobs, as only low-priority VMs are used.
+- Batch pools can contain both dedicated VMs and low-priority VMs. The number of each type of VM can be specified when a pool is created, or changed at any time for an existing pool, using the explicit resize operation or using auto-scale. Job and task submission can remain unchanged, regardless of the VM types in the pool. You can also configure a pool to completely use low-priority VMs to run jobs as cheaply as possible, but spin up dedicated VMs if the capacity drops below a minimum threshold, to keep jobs running.
+- Batch pools automatically seek the target number of low-priority VMs. If VMs are preempted or unavailable, Batch attempts to replace the lost capacity and return to the target.
+- When tasks are interrupted, Batch detects and automatically requeues tasks to run again.
+- Low-priority VMs have a separate vCPU quota that differs from the one for dedicated VMs. The quota for low-priority VMs is higher than the quota for dedicated VMs, because low-priority VMs cost less. For more information, see [Batch service quotas and limits](batch-quota-limit.md#resource-quotas).
-- Low-priority VMs can be used in conjunction with a fixed baseline of
- dedicated VMs. The fixed number of dedicated VMs ensures there is always
- some capacity to keep a job progressing.
+> [!NOTE]
+> Low-priority VMs are not currently supported for Batch accounts created in [user subscription mode](accounts.md).
-- There can be dynamic mix of dedicated and low-priority VMs, so that the
- cheaper low-priority VMs are solely used when available, but the full-priced
- dedicated VMs are scaled up when required. This configuration keeps a minimum amount of
- capacity available to keep the jobs progressing.
+## Considerations and use cases
-## Batch support for low-priority VMs
+Many Batch workloads are a good fit for low-priority VMs. Consider using them when jobs are broken into many parallel tasks, or when you have many jobs that are scaled out and distributed across many VMs.
-Azure Batch provides several capabilities that make it easy to consume and
-benefit from low-priority VMs:
+Some examples of batch processing use cases well suited to use low-priority VMs are:
-- Batch pools can contain both dedicated VMs and low-priority VMs. The number
- of each type of VM can be specified when a pool is created, or changed at
- any time for an existing pool, using the explicit resize operation or using
- auto-scale. Job and task submission can remain unchanged, regardless of the VM types in the pool. You can also configure a pool to completely use low-priority VMs to run jobs as cheaply as possible, but
- spin up dedicated VMs if the capacity drops below a minimum threshold, to
- keep jobs running.
+- **Development and testing**: In particular, if large-scale solutions are being developed, significant savings can be realized. All types of testing can benefit, but large-scale load testing and regression testing are great uses.
+- **Supplementing on-demand capacity**: Low-priority VMs can be used to supplement regular dedicated VMs. When available, jobs can scale and therefore complete quicker for lower cost; when not available, the baseline of dedicated VMs remains available.
+- **Flexible job execution time**: If there is flexibility in the time jobs have to complete, then potential drops in capacity can be tolerated; however, with the addition of low-priority VMs jobs frequently run faster and for a lower cost.
-- Batch pools automatically seek the target number of low-priority VMs. If
- VMs are preempted, then Batch attempts to replace the lost capacity and
- return to the target.
+Batch pools can be configured to use low-priority VMs in a few ways:
-- When tasks are interrupted, Batch detects and automatically
- requeues tasks to run again.
+- A pool can use only low-priority VMs. In this case, Batch recovers any preempted capacity when available. This configuration is the cheapest way to execute jobs.
+- Low-priority VMs can be used in conjunction with a fixed baseline of dedicated VMs. The fixed number of dedicated VMs ensures there is always some capacity to keep a job progressing.
+- A pool can use a dynamic mix of dedicated and low-priority VMs, so that the cheaper low-priority VMs are solely used when available, but the full-priced dedicated VMs are scaled up when required. This configuration keeps a minimum amount of capacity available to keep the jobs progressing.
-- Low-priority VMs have a separate vCPU quota that differs from the one for dedicated VMs.
- The quota for low-priority VMs is higher than the quota for dedicated VMs, because
- low-priority VMs cost less. For more information, see [Batch service quotas and limits](batch-quota-limit.md#resource-quotas).
+Keep in mind the following when planning your use of low-priority VMs:
-> [!NOTE]
-> Low-priority VMs are not currently supported for Batch accounts created in [user subscription mode](accounts.md).
+- To maximize use of surplus capacity in Azure, suitable jobs can scale out.
+- Occasionally VMs may not be available or are preempted, which results in reduced capacity for jobs and may lead to task interruption and reruns.
+- Tasks with shorter execution times tend to work best with low-priority VMs. Jobs with longer tasks may be impacted more if interrupted. If long-running tasks implement checkpointing to save progress as they execute, this impact may be reduced.
+- Long-running MPI jobs that utilize multiple VMs are not well suited to use low-priority VMs, because one preempted VM can lead to the whole job having to run again.
+- Low-priority nodes may be marked as unusable if [network security group (NSG) rules](batch-virtual-network.md#network-security-groups-specifying-subnet-level-rules) are configured incorrectly.
-## Create and update pools
+## Create and manage pools with low-priority VMs
A Batch pool can contain both dedicated and low-priority VMs (also referred to as compute nodes). You can set the target number of compute nodes for both dedicated and low-priority VMs. The target number of nodes specifies the number of VMs you want to have in the pool.
-For example, to create a pool using Azure cloud service VMs with a target of 5 dedicated VMs and
-20 low-priority VMs:
-
-```csharp
-CloudPool pool = batchClient.PoolOperations.CreatePool(
- poolId: "cspool",
- targetDedicatedComputeNodes: 5,
- targetLowPriorityComputeNodes: 20,
- virtualMachineSize: "Standard_D2_v2",
- cloudServiceConfiguration: new CloudServiceConfiguration(osFamily: "5") // WS 2016
-);
-```
-
-To create a pool using Azure virtual machines (in this case Linux VMs) with a target of 5 dedicated VMs and
-20 low-priority VMs:
+For example, to create a pool using Azure virtual machines (in this case Linux VMs) with a target of 5 dedicated VMs and 20 low-priority VMs:
```csharp
ImageReference imageRef = new ImageReference(
@@ -162,15 +100,21 @@ low-priority VM:
bool? isNodeDedicated = poolNode.IsDedicated;
```
-For Virtual Machine Configuration pools, when one or more nodes are preempted, a list nodes operation on the pool still returns those nodes. The current number of low-priority nodes remains unchanged, but those nodes have their state set to the **Preempted** state. Batch attempts to find replacement VMs and, if successful, the nodes go through **Creating** and then **Starting** states before becoming available for task execution, just like new nodes.
+VMs may occasionally be preempted. When this happens, tasks that were running on the preempted VMs are requeued and run again.
-## Scale a pool containing low-priority VMs
+For Virtual Machine Configuration pools, Batch also does the following:
+
+- The preempted VMs have their state updated to **Preempted**.
+- The VM is effectively deleted, leading to loss of any data stored locally on the VM.
+- A list nodes operation on the pool will still return the preempted nodes.
+- The pool continually attempts to reach the target number of low-priority nodes available. When replacement capacity is found, the nodes keep their IDs, but are reinitialized, going through **Creating** and **Starting** states before they are available for task scheduling.
+- Preemption counts are available as a metric in the Azure portal.
-As with pools solely consisting of dedicated VMs, it is possible to scale a
-pool containing low-priority VMs by calling the Resize method or by using autoscale.
+## Scale pools containing low-priority VMs
-The pool resize operation takes a second optional parameter that updates the
-value of **targetLowPriorityNodes**:
+As with pools solely consisting of dedicated VMs, it is possible to scale a pool containing low-priority VMs by calling the Resize method or by using autoscale.
+
+The pool resize operation takes a second optional parameter that updates the value of **targetLowPriorityNodes**:
```csharp
pool.Resize(targetDedicatedComputeNodes: 0, targetLowPriorityComputeNodes: 25);
@@ -178,48 +122,18 @@ pool.Resize(targetDedicatedComputeNodes: 0, targetLowPriorityComputeNodes: 25);
The pool autoscale formula supports low-priority VMs as follows:

-- You can get or set the value of the service-defined variable
- **$TargetLowPriorityNodes**.
--- You can get the value of the service-defined variable
- **$CurrentLowPriorityNodes**.
+- You can get or set the value of the service-defined variable **$TargetLowPriorityNodes**.
+- You can get the value of the service-defined variable **$CurrentLowPriorityNodes**.
+- You can get the value of the service-defined variable **$PreemptedNodeCount**. This variable returns the number of nodes in the preempted state and allows you to scale up or down the number of dedicated nodes, depending on the number of preempted nodes that are unavailable.
-- You can get the value of the service-defined variable **$PreemptedNodeCount**.
- This variable returns the number of nodes in the preempted state and allows you to
- scale up or down the number of dedicated nodes, depending on the number of
- preempted nodes that are unavailable.
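+
+As a minimal sketch of how these variables can work together, the following C# fragment (illustrative only: it assumes an authenticated `batchClient` from the Microsoft.Azure.Batch library and an existing pool named "mypool") enables an autoscale formula that backfills preempted low-priority capacity with dedicated nodes:
+
+```csharp
+// Illustrative formula: cap the pool at 25 VMs, replace preempted low-priority
+// capacity with dedicated nodes, and fill the remainder with low-priority nodes.
+string formula = @"
+    maxNumberofVMs = 25;
+    $TargetDedicatedNodes = min(maxNumberofVMs, $PreemptedNodeCount.GetSample(180 * TimeInterval_Second));
+    $TargetLowPriorityNodes = maxNumberofVMs - $TargetDedicatedNodes;";
+
+// Enable autoscale on the existing pool; the Batch service re-evaluates the
+// formula at the given interval and adjusts both node targets.
+batchClient.PoolOperations.EnableAutoScale(
+    poolId: "mypool",
+    autoscaleFormula: formula,
+    autoscaleEvaluationInterval: TimeSpan.FromMinutes(10));
+```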
+## Configure jobs and tasks
-## Jobs and tasks
+Jobs and tasks require little additional configuration for low-priority nodes. Keep in mind the following:
-Jobs and tasks require little additional configuration for low-priority nodes; the only
-support is as follows:
+- The JobManagerTask property of a job has an **AllowLowPriorityNode** property. When this property is true, the job manager task can be scheduled on either a dedicated or low-priority node. If it's false, the job manager task is scheduled to a dedicated node only.
+- The AZ_BATCH_NODE_IS_DEDICATED [environment variable](batch-compute-node-environment-variables.md) is available to a task application so that it can determine whether it is running on a low-priority or on a dedicated node.
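+
+As a rough sketch of both points (IDs and command line are illustrative; an authenticated `batchClient` from the Microsoft.Azure.Batch library is assumed), a job manager task that may run on a low-priority node could be configured like this:
+
+```csharp
+// Create a job whose job manager task may be scheduled on a low-priority node.
+CloudJob job = batchClient.JobOperations.CreateJob();
+job.Id = "myjob";
+job.PoolInformation = new PoolInformation { PoolId = "mypool" };
+job.JobManagerTask = new JobManagerTask("jobmanager", "cmd /c jobmanager.exe")
+{
+    AllowLowPriorityNode = true
+};
+job.Commit();
+```
+
+And a task application can inspect the environment variable at run time:
+
+```csharp
+// Inside a task application: check whether the task landed on a dedicated node.
+string isDedicated = Environment.GetEnvironmentVariable("AZ_BATCH_NODE_IS_DEDICATED");
+Console.WriteLine($"Running on a dedicated node: {isDedicated}");
+```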
-- The JobManagerTask property of a job has a new property, **AllowLowPriorityNode**.
- When this property is true, the job manager task can be scheduled on either a dedicated
- or low-priority node. If this property is false, the job manager task is
- scheduled to a dedicated node only.
--- An [environment
- variable](batch-compute-node-environment-variables.md)
- is available to a task application so that it can determine whether it is running on a
- low-priority or dedicated node. The environment variable is AZ_BATCH_NODE_IS_DEDICATED.
-
-## Handling preemption
-
-VMs may occasionally be preempted. When this happens, tasks that were running on the preempted node VMs are requeued and run again.
-
-For Virtual Machine Configuration pools, Batch also does the following:
-
-- The preempted VMs have their state updated to **Preempted**.
-- The VM is effectively deleted, leading to loss of any data stored locally on the VM.
-- The pool continually attempts to reach the target number
- of low-priority nodes available. When replacement capacity is found, the
- nodes keep their IDs, but are reinitialized, going through
- **Creating** and **Starting** states before they are available for task
- scheduling.
-- Preemption counts are available as a metric in the Azure portal.
-
-## Metrics
+## View metrics for low-priority VMs
New metrics are available in the [Azure portal](https://portal.azure.com) for low-priority nodes. These metrics are:
@@ -227,16 +141,14 @@ New metrics are available in the [Azure portal](https://portal.azure.com) for lo
- Low-Priority Core Count
- Preempted Node Count
-To view metrics in the Azure portal:
+To view these metrics in the Azure portal:
-1. Navigate to your Batch account in the portal, and view the settings for your Batch account.
+1. Navigate to your Batch account in the Azure portal.
2. Select **Metrics** from the **Monitoring** section.
-3. Select the metrics you desire from the **Available Metrics** list.
-
-![Screenshot showing metric selection for low-priority nodes.](media/batch-low-pri-vms/low-pri-metrics.png)
+3. Select the metrics you desire from the **Metric** list.
## Next steps

- Learn about the [Batch service workflow and primary resources](batch-service-workflow-features.md) such as pools, nodes, jobs, and tasks.
- Learn about the [Batch APIs and tools](batch-apis-tools.md) available for building Batch solutions.
-- Start to plan the move from low-priority VMs to Spot VMs. If you use low-priority VMs with **Cloud Service configuration** pools, then plan to move to **Virtual Machine configuration** pools.
+- Start to plan the move from low-priority VMs to Spot VMs. If you use low-priority VMs with **Cloud Service configuration** pools, plan to migrate to [**Virtual Machine configuration** pools](nodes-and-pools.md#configurations) instead.
cdn https://docs.microsoft.com/en-us/azure/cdn/cdn-map-content-to-custom-domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-map-content-to-custom-domain.md
@@ -7,7 +7,7 @@
Previously updated : 11/06/2020 Last updated : 02/04/2021 # As a website owner, I want to add a custom domain to my CDN endpoint so that my users can use my custom domain to access my content.
@@ -158,6 +158,10 @@ To create a CNAME record for your custom domain:
After you've registered your custom domain, you can then add it to your CDN endpoint.
+
+# [**Azure portal**](#tab/azure-portal)
+
1. Sign in to the [Azure portal](https://portal.azure.com/) and browse to the CDN profile containing the endpoint that you want to map to a custom domain.

2. On the **CDN profile** page, select the CDN endpoint to associate with the custom domain.
@@ -184,7 +188,43 @@ After you've registered your custom domain, you can then add it to your CDN endp
- For **Azure CDN Standard from Akamai** profiles, propagation usually completes within one minute.
- For **Azure CDN Standard from Verizon** and **Azure CDN Premium from Verizon** profiles, propagation usually completes in 10 minutes.
+# [**PowerShell**](#tab/azure-powershell)
+
+1. Sign in to Azure PowerShell:
+
+```azurepowershell-interactive
+Connect-AzAccount
+```
+2. Use [New-AzCdnCustomDomain](/powershell/module/az.cdn/new-azcdncustomdomain) to map the custom domain to your CDN endpoint.
+
+ * Replace **myendpoint8675.azureedge.net** with your endpoint URL.
+ * Replace **myendpoint8675** with your CDN endpoint name.
+ * Replace **www.contoso.com** with your custom domain name.
+ * Replace **myCDN** with your CDN profile name.
+ * Replace **myResourceGroupCDN** with your resource group name.
+
+```azurepowershell-interactive
+ $parameters = @{
+ Hostname = 'myendpoint8675.azureedge.net'
+ EndPointName = 'myendpoint8675'
+ CustomDomainName = 'www.contoso.com'
+ ProfileName = 'myCDN'
+ ResourceGroupName = 'myResourceGroupCDN'
+ }
+ New-AzCdnCustomDomain @parameters
+```
+
+Azure verifies that the CNAME record exists for the custom domain name you entered. If the CNAME is correct, your custom domain will be validated.
+
+ It can take some time for the new custom domain settings to propagate to all CDN edge nodes:
+
+- For **Azure CDN Standard from Microsoft** profiles, propagation usually completes in 10 minutes.
+- For **Azure CDN Standard from Akamai** profiles, propagation usually completes within one minute.
+- For **Azure CDN Standard from Verizon** and **Azure CDN Premium from Verizon** profiles, propagation usually completes in 10 minutes.
+---
+
## Verify the custom domain

After you've completed the registration of your custom domain, verify that the custom domain references your CDN endpoint.
@@ -195,6 +235,9 @@ After you've completed the registration of your custom domain, verify that the c
## Clean up resources

+
+# [**Azure portal**](#tab/azure-portal-cleanup)
+
If you no longer want to associate your endpoint with a custom domain, remove the custom domain by doing the following steps:

1. In your CDN profile, select the endpoint with the custom domain that you want to remove.
@@ -203,6 +246,29 @@ If you no longer want to associate your endpoint with a custom domain, remove th
The custom domain is disassociated from your endpoint.
+# [**PowerShell**](#tab/azure-powershell-cleanup)
+
+If you no longer want to associate your endpoint with a custom domain, remove the custom domain by doing the following steps:
+
+1. Use [Remove-AzCdnCustomDomain](/powershell/module/az.cdn/remove-azcdncustomdomain) to remove the custom domain from the endpoint:
+
+ * Replace **myendpoint8675** with your CDN endpoint name.
+ * Replace **www.contoso.com** with your custom domain name.
+ * Replace **myCDN** with your CDN profile name.
+ * Replace **myResourceGroupCDN** with your resource group name.
+
+```azurepowershell-interactive
+ $parameters = @{
+ CustomDomainName = 'www.contoso.com'
+ EndPointName = 'myendpoint8675'
+ ProfileName = 'myCDN'
+ ResourceGroupName = 'myResourceGroupCDN'
+ }
+ Remove-AzCdnCustomDomain @parameters
+```
+---
+
## Next steps

In this tutorial, you learned how to:
cloud-services-extended-support https://docs.microsoft.com/en-us/azure/cloud-services-extended-support/deploy-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-powershell.md
@@ -135,7 +135,7 @@ Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services
$rdpExtension = New-AzCloudServiceRemoteDesktopExtensionObject -Name 'RDPExtension' -Credential $credential -Expiration $expiration -TypeHandlerVersion '1.2.1'

$storageAccountKey = Get-AzStorageAccountKey -ResourceGroupName "ContosOrg" -Name "contosostorageaccount"
- $configFile = "<WAD configuration file path>"
+ $configFile = "<WAD public configuration file path>"
$wadExtension = New-AzCloudServiceDiagnosticsExtension -Name "WADExtension" -ResourceGroupName "ContosOrg" -CloudServiceName "ContosCS" -StorageAccountName "ContosSA" -StorageAccountKey $storageAccountKey[0].Value -DiagnosticsConfigurationPath $configFile -TypeHandlerVersion "1.5" -AutoUpgradeMinorVersion $true
$extensionProfile = @{extension = @($rdpExtension, $wadExtension)}
```
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Computer-vision/includes/quickstarts-sdk/csharp-sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/includes/quickstarts-sdk/csharp-sdk.md
@@ -250,9 +250,21 @@ In your **Program** class, save a reference to the URL of the image you want to
### Call the Read API
-Add the following method, which calls the **ReadAsync** method for the given image. This returns an operation ID and starts an asynchronous process to read the content of the image. Next, get the operation ID returned from the **ReadAsync** call, and use it to poll the service for the results of the operation. Finally, print the extracted text to the console.
+Define a new method to read text. Add the code below, which calls the **ReadAsync** method for the given image. This returns an operation ID and starts an asynchronous process to read the content of the image.
-[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/ComputerVision/ComputerVisionQuickstart.cs?name=snippet_read_url)]
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/ComputerVision/ComputerVisionQuickstart.cs?name=snippet_readfileurl_1)]
+
+### Get Read results
+
+Next, get the operation ID returned from the **ReadAsync** call, and use it to query the service for operation results. The following code checks the operation until the results are returned. It then prints the extracted text data to the console.
+
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/ComputerVision/ComputerVisionQuickstart.cs?name=snippet_readfileurl_2)]
+
+### Display Read results
+
+Add the following code to parse and display the retrieved text data, and finish the method definition.
+
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/ComputerVision/ComputerVisionQuickstart.cs?name=snippet_readfileurl_3)]
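+
+For reference, the end-to-end flow in those snippets looks roughly like the following sketch. This is an approximation, not the article's exact sample: it assumes the Microsoft.Azure.CognitiveServices.Vision.ComputerVision client library and an authenticated `client`, and the method name is illustrative.
+
+```csharp
+public static async Task ReadTextFromUrl(ComputerVisionClient client, string urlFile)
+{
+    // Start the asynchronous Read operation on the remote image.
+    var textHeaders = await client.ReadAsync(urlFile);
+
+    // The operation ID is the trailing GUID of the Operation-Location URL.
+    string operationLocation = textHeaders.OperationLocation;
+    string operationId = operationLocation.Substring(operationLocation.Length - 36);
+
+    // Poll until the operation leaves the not-started/running states.
+    ReadOperationResult results;
+    do
+    {
+        results = await client.GetReadResultAsync(Guid.Parse(operationId));
+    }
+    while (results.Status == OperationStatusCodes.Running ||
+           results.Status == OperationStatusCodes.NotStarted);
+
+    // Print each line of recognized text to the console.
+    foreach (ReadResult page in results.AnalyzeResult.ReadResults)
+    {
+        foreach (Line line in page.Lines)
+        {
+            Console.WriteLine(line.Text);
+        }
+    }
+}
+```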
## Run the application
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/How-To/use-active-learning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/How-To/use-active-learning.md
@@ -13,11 +13,18 @@ Last updated 03/18/2020
Your knowledge base doesn't change automatically. In order for any change to take effect, you must accept the suggestions. These suggestions add questions but don't change or remove existing questions.
-
## Upgrade runtime version to use active learning
+# [QnA Maker GA (stable release)](#tab/v1)
+ Active Learning is supported in runtime version 4.4.0 and above. If your knowledge base was created on an earlier version, [upgrade your runtime](set-up-qnamaker-service-azure.md#get-the-latest-runtime-updates) to use this feature.
+# [QnA Maker managed (preview release)](#tab/v2)
+
+In QnA Maker managed (preview), the runtime is hosted by the QnA Maker service itself, so there is no need to upgrade the runtime manually.
+---
+
## Turn on active learning for alternate questions

# [QnA Maker GA (stable release)](#tab/v1)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/concepts-disclosure-guidelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/concepts-disclosure-guidelines.md
@@ -69,12 +69,8 @@ Use the following diagram to determine whether your synthetic voice experience r
![Disclosure assessment diagram](media/responsible-ai/disclosure-guidelines/flowchart.png)
-## Reference docs
+## See also
-* [Disclosure for Voice Talent](/legal/cognitive-services/speech-service/disclosure-voice-talent)
-* [Guidelines for Responsible Deployment of Synthetic Voice Technology](concepts-guidelines-responsible-deployment-synthetic.md)
-* [Gating overview](concepts-gating-overview.md)
-
-## Next steps
-
-* [Disclosure design patterns](concepts-disclosure-patterns.md)
+* [Disclosure design patterns](concepts-disclosure-patterns.md)
+* [Disclosure for Voice Talent](https://docs.microsoft.com/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/cognitive-services/speech-service/context/context)
+* [Guidelines for Responsible Deployment of Synthetic Voice Technology](concepts-guidelines-responsible-deployment-synthetic.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/concepts-disclosure-patterns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/concepts-disclosure-patterns.md
@@ -232,21 +232,14 @@ Use disclosure as an opportunity to fail gracefully.
- [Providing opportunities to learn more about how the voice was made](#providing-opportunities-to-learn-more-about-how-the-voice-was-made) - [Handoff to human](#conversational-transparency) -- ## Additional resources - [Microsoft Bot Guidelines](https://www.microsoft.com/research/uploads/prod/2018/11/Bot_Guidelines_Nov_2018.pdf) - [Cortana Design Guidelines](/cortana/voice-commands/voicecommand-design-guidelines) - [Microsoft Windows UWP Speech Design Guidelines](/windows/uwp/design/input/speech-interactions) - [Microsoft Windows Mixed Reality Voice Commanding Guidelines](/windows/mixed-reality/voice-design#top-things-users-should-know-about-speech-in-mixed-reality)
-## Reference docs
+## See also
-* [Disclosure for Voice Talent](/legal/cognitive-services/speech-service/disclosure-voice-talent)
+* [Disclosure for Voice Talent](https://docs.microsoft.com/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/cognitive-services/speech-service/context/context)
* [Guidelines for Responsible Deployment of Synthetic Voice Technology](concepts-guidelines-responsible-deployment-synthetic.md)
-* [Gating Overview](concepts-gating-overview.md)
-* [How to Disclose](concepts-disclosure-guidelines.md)
-
-## Next steps
-
-* [Disclosure for Voice Talent](/legal/cognitive-services/speech-service/disclosure-voice-talent)
+* [How to Disclose](concepts-disclosure-guidelines.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/concepts-gating-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/concepts-gating-overview.md
@@ -1,48 +0,0 @@
- Title: Custom Neural Voice Gating overview
-
-description: Introduction to the gating process for custom neural voice.
-
- Previously updated : 10/09/2019
-
-# Custom Neural Voice gating overview
-
-Learn more about the process for getting started with Custom Neural Voice.
-
-## Commitment to responsible innovation
-
-As part of Microsoft's commitment to designing responsible AI, we have assembled a set of materials to guide customers in using Custom Neural Voice. The guidelines and insights found here are based on Microsoft's [principles for responsible innovation in AI.](https://www.microsoft.com/AI/our-approach-to-ai)
-
-### Guidance for deploying Custom Neural Voice
-
-- [Guidelines for Responsible Deployment](concepts-guidelines-responsible-deployment-synthetic.md): our top recommendations based on our research
-- [Disclosure for Voice Talent](/legal/cognitive-services/speech-service/disclosure-voice-talent): what you need to know and inform voice talent about the technology to use it responsibly
-- [Disclosure Design](concepts-disclosure-guidelines.md): how to design experiences so that users know when a synthetic voice is being used and trust your service
-
-### Why Custom Neural Voice is a gated technology
-
-Our intent is to protect the rights of individuals and society, foster transparent human-computer interactions, and counteract the proliferation of harmful deepfakes and misleading content. For this reason, we have gated the use of Custom Neural Voice. Customers gain access to the technology only after their applications are reviewed and they have committed to using it in alignment with our ethics principles.
-
-### Our gating process
-
-To get access to Custom Neural Voice, you'll need to start by filling out our online intake form. Begin your application [here](https://aka.ms/custom-neural-intake-form).
-
-Access to the Custom Neural Voice service is subject to Microsoft's sole discretion based on our eligibility criteria, vetting process, and availability to support a limited number of customers during this gated preview.
-
-As part of the application process, you will need to commit to obtaining explicit written permission from voice talent prior to creating a voice font, which includes sharing the [Disclosure for Voice Talent](/legal/cognitive-services/speech-service/disclosure-voice-talent). You must also agree that when deploying the voice font, your implementation will [disclose the synthetic nature](concepts-disclosure-guidelines.md) of the service to users, provide attribution to the Microsoft synthetic speech service in your terms of service, and support a feedback channel that allows users of the service to report issues and share details with Microsoft. Learn more about our Terms of Use [here](/legal/cognitive-services/speech-service/tts-code-of-conduct).
-
-## Reference docs
-
-* [Disclosure for voice talent](/legal/cognitive-services/speech-service/disclosure-voice-talent)
-* [Guidelines for responsible deployment of synthetic voice technology](concepts-guidelines-responsible-deployment-synthetic.md)
-* [How to Disclose](concepts-disclosure-guidelines.md)
-
-## Next steps
-
-* [Guidelines for responsible deployment of synthetic voice technology](concepts-guidelines-responsible-deployment-synthetic.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/concepts-guidelines-responsible-deployment-synthetic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/concepts-guidelines-responsible-deployment-synthetic.md
@@ -14,15 +14,7 @@
# Guidelines for responsible deployment of synthetic voice technology
-## General considerations to keep in mind when implementing AI systems
-
-This article talks specifically about synthetic speech and Custom Neural Voice and the key considerations for making use of this technology responsibly. In general, however, there are several things you need to consider carefully when deciding how to use and implement AI-powered products and features:
-
-* Will this product or feature perform well in my scenario? Before deploying AI into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
-* Are we equipped to identify and respond to errors? AI-powered products and features will not always be 100% accurate, so consider how you will identify and respond to any errors that may occur.
-
-## General guidelines for using synthetic voice technology
-Here are Microsoft's general design guidelines for using synthetic voice technology. These were developed in studies that Microsoft conducted with voice talent, consumers, as well as individuals with speech disorders to guide the responsible development of synthetic voice.
+In this article, you learn about Microsoft's general design guidelines for using synthetic voice technology. These guidelines were developed in studies that Microsoft conducted with voice talent, consumers, and individuals with speech disorders to guide the responsible development of synthetic voices.
For deployment of synthetic speech technology, the following guidelines apply across most scenarios.
@@ -34,10 +26,12 @@ Microsoft requires its customers to disclose the synthetic nature of custom neur
* Consider proper disclosure to parents or other parties with use cases that are designed for minors and children - If your use case is intended for minors or children, you will need to ensure that the parents or legal guardians are able to understand the disclosure about the use of synthetic media and make the right decision for the minors or children on whether to use the experience.

### Select appropriate voice types for your scenario
-Carefully consider the context of use and the potential harms associated with using synthetic voice. For example, high-fidelity synthetic voices may not be appropriate in high-risk scenarios, such as for personal messaging, financial transactions, or complex situations that require human adaptability or empathy. Users may also have different expectations for voice types. For example, when listening to sensitive news being read by a synthetic voice, some users prefer a more empathetic and human-like reading of the news, while others preferred a more monotone, unbiased voice. Consider testing your application to better understand user preferences.
+Carefully consider the context of use and the potential harms associated with using synthetic voice. For example, high-fidelity synthetic voices may not be appropriate in high-risk scenarios, such as for personal messaging, financial transactions, or complex situations that require human adaptability or empathy.
+
+Users may also have different expectations for voice types. For example, when listening to sensitive news read by a synthetic voice, some users prefer a more empathetic and human-like tone, while others prefer an unbiased voice. Consider testing your application to better understand user preferences.
### Be transparent about capabilities and limitations
-Users are more likely to have higher expectations when interacting with high-fidelity synthetic voice agents. Consequently, when system capabilities don't meet those expectations, trust can suffer, and may result in unpleasant, or even harmful experiences.
+Users are more likely to have higher expectations when interacting with high-fidelity synthetic voice agents. When system capabilities don't meet those expectations, trust can suffer, and may result in unpleasant, or even harmful experiences.
### Provide optional human support

In ambiguous, transactional scenarios (for example, a call support center), users don't always trust a computer agent to appropriately respond to their requests. Human support may be necessary in these situations, regardless of the realistic quality of the voice or capability of the system.
@@ -54,7 +48,7 @@ Some voice talents are unaware of the potential malicious uses of the technology
When working with individuals with speech disorders, to create or deploy synthetic voice technology, the following guidelines apply.

### Provide guidelines to establish contracts
-Provide guidelines for establishing contracts with individuals who use synthetic voice for assistance in speaking. The contract should consider specifying the parties who own the voice, duration of use, ownership transfer criteria, procedures for deleting the voice font, and how to prevent unauthorized access. Additionally, enable the contractual transfer of voice font ownership after death to family members if that person has given permission.
+Provide guidelines for establishing contracts with individuals who use synthetic voice for assistance in speaking. The contract should consider specifying the parties who own the voice, duration of use, ownership transfer criteria, procedures for deleting the voice font, and how to prevent unauthorized access. Additionally, enable the transfer of voice font ownership after death to family members, if permission has been given.
### Account for inconsistencies in speech patterns

For individuals with speech disorders who record their own voice fonts, inconsistencies in their speech pattern (slurring or inability to pronounce certain words) may complicate the recording process. In these cases, synthetic voice technology and recording sessions should accommodate them (that is, provide breaks and additional recording sessions).
@@ -63,15 +57,8 @@ For individuals with speech disorders who record their own voice fonts, inconsis
Individuals with speech disorders desire to make updates to their synthetic voice to reflect aging (for example, a child reaching puberty). Individuals may also have stylistic preferences that change over time, and may want to make changes to pitch, accent, or other voice characteristics.
-## Reference docs
-
-* [Disclosure for Voice Talent](/legal/cognitive-services/speech-service/disclosure-voice-talent)
-* [Gating Overview](concepts-gating-overview.md)
-* [How to Disclose](concepts-disclosure-guidelines.md)
-* [Disclosure Design Patterns](concepts-disclosure-patterns.md)
-
-## Next steps
+## See also
-* [Disclosure for Voice Talent](/legal/cognitive-services/speech-service/disclosure-voice-talent)
+* [Disclosure for Voice Talent](https://docs.microsoft.com/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/cognitive-services/speech-service/context/context)
* [How to Disclose](concepts-disclosure-guidelines.md)
-* [Disclosure Design Patterns](concepts-disclosure-patterns.md)
+* [Disclosure Design Patterns](concepts-disclosure-patterns.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/custom-neural-voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/custom-neural-voice.md
@@ -0,0 +1,88 @@
+
+ Title: Custom neural voice overview - Speech service
+
+description: Custom Neural Voice is a Text-to-Speech feature that allows you to create a one-of-a-kind customized synthetic voice for your applications by providing your own audio data as a sample.
+
+ Last updated : 02/01/2021
+
+# What is custom neural voice?
+
+Custom Neural Voice is a [Text-to-Speech](https://docs.microsoft.com/azure/cognitive-services/speech-service/text-to-speech) (TTS) feature that allows you to create a one-of-a-kind customized synthetic voice for your applications by providing your own audio data as a sample. Text-to-Speech works by converting text into synthetic speech using a machine learning model that sounds like a chosen voice. With the [REST API](https://docs.microsoft.com/azure/cognitive-services/speech-service/rest-text-to-speech), you can enable your apps to speak with [pre-built voices](https://docs.microsoft.com/azure/cognitive-services/speech-service/language-support#neural-voices) or your own [custom voice](https://docs.microsoft.com/azure/cognitive-services/speech-service/how-to-custom-voice-prepare-data) models developed through the Custom Neural Voice feature. Custom Neural Voice is based on Neural TTS technology, which creates a natural-sounding voice that is often indistinguishable from a human voice. The realistic and natural-sounding voice of Custom Neural Voice can represent brands, personify machines, and allow users to interact with applications conversationally in a natural way.
+
+> [!NOTE]
+> The Custom Neural Voice feature requires registration, and access to it is limited based upon Microsoft's eligibility and use criteria. Customers who wish to use this feature are required to register their use cases through the [intake form](https://aka.ms/customneural).
+
+## The basics of Custom Neural Voice
+
+The underlying Neural TTS technology used for Custom Neural Voice
+consists of three major components: Text Analyzer, Neural Acoustic
+Model, and Neural Vocoder. To generate natural synthetic speech from
+text, text is first input into Text Analyzer, which provides output in
+the form of phoneme sequence. A phoneme is a basic unit of sound that
+distinguishes one word from another in a particular language. A sequence
+of phonemes defines the pronunciations of the words provided in the
+text.
+
+Next, the phoneme sequence goes into the Neural Acoustic Model to
+predict acoustic features that define speech signals, such as the
+timbre, the speaking style, speed, intonations, and stress patterns. Finally, the Neural Vocoder converts the acoustic features into audible waves so that synthetic speech is generated.
+
+![Introduction image for custom neural voice.](./media/custom-voice/cnv-intro.png)
+
+Neural TTS voice models are trained using deep neural networks based on
+the recording samples of human voices. In this
+[blog](https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-extends-support-to-15-more-languages-with/ba-p/1505911),
+we describe how Neural TTS works with state-of-the-art neural speech
+synthesis models. The blog also explains how a universal base model can be adapted with less
+than 2 hours of speech data (or less than 2,000 recorded utterances)
+from a target speaker, and learn to speak in that target speaker's voice. To read about how a neural vocoder is trained, see the [blog post](https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-tts-upgraded-with-hifinet-achieving-higher-audio/ba-p/1847860).
+
+With the customization capability of Custom Neural Voice, you can adapt
+the Neural TTS engine to better fit your user scenarios. To create a
+custom neural voice, use [Speech Studio](https://speech.microsoft.com/customvoice) to upload the recorded
+audio and corresponding scripts, train the model, and deploy the voice
+to a custom endpoint. Depending on the use case, Custom Neural Voice can
+be used to convert text into speech in real-time (e.g., used in a smart
+virtual assistant) or generate audio content offline (e.g., used in
+audio books or instructions in e-learning applications) with the text
+input provided by the user. This is made available via the [REST API](https://docs.microsoft.com/azure/cognitive-services/speech-service/rest-text-to-speech), the
+[Speech SDK](https://docs.microsoft.com/azure/cognitive-services/speech-service/get-started-text-to-speech?tabs=script%2Cwindowsinstall&pivots=programming-language-csharp),
+or a [web portal](https://speech.microsoft.com/audiocontentcreation).
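+
+As a minimal illustration of the real-time path, the following C# sketch uses the Speech SDK to synthesize speech with a deployed custom voice. The subscription key, region, endpoint ID, and voice name are placeholders, not values from this article:
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.CognitiveServices.Speech;
+
+class Program
+{
+    static async Task Main()
+    {
+        // Placeholder credentials for your Speech resource.
+        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
+
+        // Point the config at your deployed custom voice endpoint (placeholders).
+        config.EndpointId = "YourCustomEndpointId";
+        config.SpeechSynthesisVoiceName = "YourCustomVoiceName";
+
+        using var synthesizer = new SpeechSynthesizer(config);
+        var result = await synthesizer.SpeakTextAsync("Hello from a custom neural voice.");
+        Console.WriteLine($"Synthesis result: {result.Reason}");
+    }
+}
+```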
+
+## Terms and definitions
+
+| **Term** | **Definition** |
+|||
+| Voice model | A text-to-speech model that can mimic the unique vocal characteristics of a target speaker. A *voice model* is also known as a *voice font* or *synthetic voice*. A voice model is a set of parameters in binary format that is not human readable and does not contain audio recordings. It cannot be reverse engineered to derive or construct the audio of a human voice. |
+| Voice talent | Individuals or target speakers whose voices are recorded and used to create voice models that are intended to sound like the voice talent's voice. |
+| Standard TTS | The standard, or "traditional," method of TTS that breaks down spoken language into phonetic snippets so that they can be remixed and matched using classical programming or statistical methods. |
+| Neural TTS | Neural TTS synthesizes speech using deep neural networks that have "learned" the way phonetics are combined in natural human speech, rather than using procedural programming or statistical methods. In addition to the recordings of a target voice talent, Neural TTS uses a source library/base model that is built with voice recordings from many different speakers. |
+| Training data | A custom neural voice training dataset that includes the audio recordings of the voice talent, and the associated text transcriptions. |
+| Persona | A persona describes who you want this voice to be. A good persona design will inform all voice creation, whether it's choosing an available voice model already created, or starting from scratch by casting and recording a new voice talent. |
+| Script | A script is a text file that contains the utterances to be spoken by your voice talent. (The term "*utterances*" encompasses both full sentences and shorter phrases.) |
+
+## Responsible use of AI
+
+To learn how to use Custom Neural Voice responsibly, see the [transparency note](/legal/cognitive-services/speech-service/custom-neural-voice/transparency-note-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context). Microsoft's transparency notes are intended to help you understand how our AI technology works, the choices system owners can make that influence system performance and behavior, and the importance of thinking about the whole system, including the technology, the people, and the environment.
+
+## Next steps
+
+* [Get started with Custom Voice](how-to-custom-voice.md)
+* [Create and use a Custom Voice endpoint](how-to-custom-voice-create-voice.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-custom-voice-create-voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
@@ -47,17 +47,22 @@ The following table shows the processing states for imported datasets:
After validation is complete, you can see the total number of matched utterances for each of your datasets in the **Utterances** column. If the data type you have selected requires long-audio segmentation, this column only reflects the utterances we have segmented for you either based on your transcripts or through the speech transcription service. You can further download the dataset validated to view the detail results of the utterances successfully imported and their mapping transcripts. Hint: long-audio segmentation can take more than an hour to complete data processing.
-For en-US and zh-CN datasets, you can further download a report to check the pronunciation scores and the noise level for each of your recordings. The pronunciation score ranges from 0 to 100. A score below 70 normally indicates a speech error or script mismatch. A heavy accent can reduce your pronunciation score and impact the generated digital voice.
+In the data detail view, you can further check the pronunciation scores and the noise level for each of your datasets. The pronunciation score ranges from 0 to 100. A score below 70 normally indicates a speech error or script mismatch. A heavy accent can reduce your pronunciation score and impact the generated digital voice.
A higher signal-to-noise ratio (SNR) indicates lower noise in your audio. You can typically reach a 50+ SNR by recording at professional studios. Audio with an SNR below 20 can result in obvious noise in your generated voice. Consider re-recording any utterances with low pronunciation scores or poor signal-to-noise ratios. If you can't re-record, you might exclude those utterances from your dataset.
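+
+For reference, SNR here is the standard decibel ratio of signal power to noise power (a general definition, not specific to this service):
+
+```latex
+\mathrm{SNR}_{\mathrm{dB}} = 10 \log_{10}\!\left(\frac{P_{\text{signal}}}{P_{\text{noise}}}\right)
+```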
+> [!NOTE]
+> If you are using Custom Neural Voice, you must register your voice talent in the **Voice Talent** tab. When preparing your recording script, make sure you include the following sentence to acquire the voice talent's acknowledgment that their voice data will be used to create a TTS voice model and generate synthetic speech:
+>
+> "I [state your first and last name] am aware that recordings of my voice will be used by [state the name of the company] to create and use a synthetic version of my voice."
+>
+> This sentence is used to verify that the recordings in your training datasets were made by the same person who gives the consent. [Read more about how your data will be processed and how voice talent verification is done here](https://aka.ms/CNV-data-privacy).
+ ## Build your custom voice model After your dataset has been validated, you can use it to build your custom voice model.
-1. Navigate to **Text-to-Speech > Custom Voice > [name of project] > Training**.
+1. Navigate to **Text-to-Speech > Custom Voice > [name of project] > Model**.
2. Click **Train model**.
@@ -67,15 +72,22 @@ After your dataset has been validated, you can use it to build your custom voice
A common use of the **Description** field is to record the names of the datasets that were used to create the model.
-4. From the **Select training data** page, choose one or multiple datasets that you would like to use for training. Check the number of utterances before you submit them. You can start with any number of utterances for en-US and zh-CN voice models. For other locales, you must select more than 2,000 utterances to be able to train a voice.
+4. From the **Select training data** page, choose one or multiple datasets that you would like to use for training. Check the number of utterances before you submit them. You can start with any number of utterances for en-US and zh-CN voice models using the "Adaptive" training method. For other locales, you must select more than 2,000 utterances to be able to train a voice using a standard tier including the "Statistical parametric" and "Concatenative" training methods, and more than 300 utterances to train a custom neural voice.
> [!NOTE]
> Duplicate audio names will be removed from the training. Make sure the datasets you select do not contain the same audio names across multiple .zip files.

> [!TIP]
- > Using the datasets from the same speaker is required for quality results. When the datasets you have submitted for training contain a total number of less than 6,000 distinct utterances, you will train your voice model through the Statistical Parametric Synthesis technique. In the case where your training data exceeds a total number of 6,000 distinct utterances, you will kick off a training process with the Concatenation Synthesis technique. Normally the concatenation technology can result in more natural, and higher-fidelity voice results. [Contact the Custom Voice team](https://go.microsoft.com/fwlink/?linkid=2108737) if you want to train a model with the latest Neural TTS technology that can produce a digital voice equivalent to the publicly available [neural voices](language-support.md#neural-voices).
> Using the datasets from the same speaker is required for quality results. Different training methods require different amounts of training data. To train a model with the "Statistical parametric" method, at least 2,000 distinct utterances are required. For the "Concatenative" method, it's 6,000 utterances, while for "Neural", the minimum data size requirement is 300 utterances.
-5. Click **Train** to begin creating your voice model.
+5. Select the **training method** in the next step.
+
+ > [!NOTE]
> If you would like to train a neural voice, you must specify a voice talent profile with an audio consent file in which the voice talent acknowledges the use of his or her speech data to train a custom voice model. Custom Neural Voice is available with limited access. Make sure you understand the [responsible AI requirements](https://aka.ms/gating-overview) and [apply for access here](https://aka.ms/customneural).
+
On this page, you can also choose to upload your script for testing. The testing script must be a .txt file smaller than 1 MB. Supported encoding formats include ANSI/ASCII, UTF-8, UTF-8-BOM, UTF-16-LE, and UTF-16-BE. Each paragraph of the script results in a separate audio file. If you want to combine all sentences into one audio file, write them as a single paragraph.
+
+6. Click **Train** to begin creating your voice model.
The Training table displays a new entry that corresponds to this newly created model. The table also displays the status: Processing, Succeeded, Failed.
@@ -87,11 +99,14 @@ The status that's shown reflects the process of converting your dataset to a voi
| Succeeded | Your voice model has been created and can be deployed. |
| Failed | Your voice model failed during training. This can have many causes, such as unseen data problems or network issues. |
-Training time varies depending on the volume of audio data processed. Typical times range from about 30 minutes for hundreds of utterances to 40 hours for 20,000 utterances. Once your model training is succeeded, you can start to test it.
+Training time varies depending on the volume of audio data processed and the training method you have selected. It can range from 30 minutes to 40 hours. Once your model training succeeds, you can start to test it.
> [!NOTE]
> Free subscription (F0) users can train one voice font simultaneously. Standard subscription (S0) users can train three voices simultaneously. If you reach the limit, wait until at least one of your voice fonts finishes training, and then try again.
+> [!NOTE]
+> Training of custom neural voices is not free. See the [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
> [!NOTE]
> The maximum number of voice models allowed to be trained per subscription is 10 models for free subscription (F0) users and 100 for standard subscription (S0) users.
@@ -99,33 +114,28 @@ If you are using the neural voice training capability, you can select to train a
## Test your voice model
-After your voice font is successfully built, you can test it before deploying it for use.
+Each training will generate 100 sample audio files automatically to help you test the model. After your voice model is successfully built, you can test it before deploying it for use.
-1. Navigate to **Text-to-Speech > Custom Voice > [name of project] > Testing**.
+1. Navigate to **Text-to-Speech > Custom Voice > [name of project] > Model**.
-2. Click **Add test**.
+2. Click the name of the model you would like to test.
-3. Select one or multiple models that you would like to test.
+3. On the model detail page, you can find the sample audio files under the **Testing** tab.
-4. Provide the text you want the voice(s) to speak. If you have selected to test multiple models at one time, the same text will be used for the testing for different models.
-
- > [!NOTE]
- > The language of your text must be the same as the language of your voice font. Only successfully trained models can be tested. Only plain text is supported in this step.
-
-5. Click **Create**.
-
-Once you have submitted your test request, you will return to the test page. The table now includes an entry that corresponds to your new request and the status column. It can take a few minutes to synthesize speech. When the status column says **Succeeded**, you can play the audio, or download the text input (a .txt file) and audio output (a .wav file), and further audition the latter for quality.
-
-You can also find the test results in the detail page of each models you have selected for testing. Go to the **Training** tab, and click the model name to enter the model detail page.
+The quality of the voice depends on a number of factors, including the size of the training data, the quality of the recording, the accuracy of the transcript file, how well the recorded voice in the training data matches the personality of the designed voice for your intended use case, and more. [Check here to learn more about the capabilities and limits of our technology and the best practices for improving your model quality](https://aka.ms/CNV-limits).
## Create and use a custom voice endpoint

After you've successfully created and tested your voice model, you deploy it in a custom Text-to-Speech endpoint. You then use this endpoint in place of the usual endpoint when making Text-to-Speech requests through the REST API. Your custom endpoint can be called only by the subscription that you used to deploy the model.
-To create a new custom voice endpoint, go to **Text-to-Speech > Custom Voice > Deployment**. Select **Add endpoint** and enter a **Name** and **Description** for your custom endpoint. Then select the custom voice model you would like to associate with this endpoint.
+To create a new custom voice endpoint, go to **Text-to-Speech > Custom Voice > Endpoint**. Select **Add endpoint** and enter a **Name** and **Description** for your custom endpoint. Then select the custom voice model you would like to associate with this endpoint.
After you have clicked the **Add** button, in the endpoint table, you will see an entry for your new endpoint. It may take a few minutes to instantiate a new endpoint. When the status of the deployment is **Succeeded**, the endpoint is ready for use.
+You can **Suspend** and **Resume** your endpoint if you don't use it all the time. When an endpoint is reactivated after suspension, the endpoint URL stays the same, so you don't need to change the code in your apps.
+
+You can also update the endpoint to a new model. To change the model, make sure the new model is named the same as the one you want to update.
+ > [!NOTE]
+ > Free subscription (F0) users can have only one model deployed. Standard subscription (S0) users can create up to 50 endpoints, each with its own custom voice.
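+
+As an illustration, a request to a custom endpoint looks like a standard Text-to-Speech REST request with a `deploymentId` query parameter. Below is a minimal sketch in JavaScript (Node 18+ in an ES module, where `fetch` and top-level `await` are available); the exact host appears on your endpoint detail page, and the region, endpoint ID, key, and voice name here are placeholders:
+
+```javascript
+// Sketch only: substitute your own region, endpoint ID, key, and voice name.
+const url =
+  "https://westus.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId=YOUR_ENDPOINT_ID";
+
+const ssml = `<speak version='1.0' xml:lang='en-US'>
+  <voice name='YourCustomVoiceName'>This is my custom voice.</voice>
+</speak>`;
+
+const response = await fetch(url, {
+  method: "POST",
+  headers: {
+    "Ocp-Apim-Subscription-Key": "YOUR_SPEECH_KEY",
+    "Content-Type": "application/ssml+xml",
+    "X-Microsoft-OutputFormat": "riff-16khz-16bit-mono-pcm"
+  },
+  body: ssml
+});
+// On success, the response body is the synthesized audio in the requested format.
+```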
@@ -142,4 +152,4 @@ The custom endpoint is functionally identical to the standard endpoint that's us
* [Guide: Record your voice samples](record-custom-voice-samples.md)
* [Text-to-Speech API reference](rest-text-to-speech.md)
-* [Long Audio API](long-audio-api.md)
+* [Long Audio API](long-audio-api.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-custom-voice-prepare-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-voice-prepare-data.md
@@ -17,7 +17,15 @@
When you're ready to create a custom text-to-speech voice for your application, the first step is to gather audio recordings and associated scripts to start training the voice model. The Speech service uses this data to create a unique voice tuned to match the voice in the recordings. After you've trained the voice, you can start synthesizing speech in your applications.
-You can start with a small amount of data to create a proof of concept. However, the more data that you provide, the more natural your custom voice will sound. Before you can train your own text-to-speech voice model, you'll need audio recordings and the associated text transcriptions. On this page, we'll review data types, how they are used, and how to manage each.
+Before you can train your own text-to-speech voice model, you'll need audio recordings and the associated text transcriptions. On this page, we'll review data types, how they are used, and how to manage each.
+
+> [!NOTE]
+> If you would like to train a neural voice, you must specify a voice talent profile with an audio consent file in which the voice talent acknowledges that his or her speech data will be used to train a custom voice model. When preparing your recording script, make sure you include the following sentence:
+
+> "I [state your first and last name] am aware that recordings of my voice will be used by [state the name of the company] to create and use a synthetic version of my voice."
+> This sentence is used to verify that the training data was recorded by the same person who gave the consent. Read more about [voice talent verification](https://aka.ms/CNV-data-privacy) here.
+
+> Custom Neural Voice is available with limited access. Make sure you understand the [responsible AI requirements](https://aka.ms/gating-overview) and [apply for access here](https://aka.ms/customneural).
## Data types
@@ -27,22 +35,22 @@ In some cases, you may not have the right dataset ready and will want to test th
This table lists data types and how each is used to create a custom text-to-speech voice model.
-| Data type | Description | When to use | Additional service required | Quantity for training a model | Locale(s) |
-| | -- | -- | | -- | |
-| **Individual utterances + matching transcript** | A collection (.zip) of audio files (.wav) as individual utterances. Each audio file should be 15 seconds or less in length, paired with a formatted transcript (.txt). | Professional recordings with matching transcripts | Ready for training. | No hard requirement for en-US and zh-CN. More than 2,000+ distinct utterances for other locales. | [All Custom Voice locales](language-support.md#customization) |
-| **Long audio + transcript (beta)** | A collection (.zip) of long, unsegmented audio files (longer than 20 seconds), paired with a transcript (.txt) that contains all spoken words. | You have audio files and matching transcripts, but they are not segmented into utterances. | Segmentation (using batch transcription).<br>Audio format transformation where required. | No hard requirement | [All Custom Voice locales](language-support.md#customization) |
-| **Audio only (beta)** | A collection (.zip) of audio files without a transcript. | You only have audio files available, without transcripts. | Segmentation + transcript generation (using batch transcription).<br>Audio format transformation where required.| No hard requirement | [All Custom Voice locales](language-support.md#customization) |
+| Data type | Description | When to use | Additional processing required |
+| --- | --- | --- | --- |
+| **Individual utterances + matching transcript** | A collection (.zip) of audio files (.wav) as individual utterances. Each audio file should be 15 seconds or less in length, paired with a formatted transcript (.txt). | Professional recordings with matching transcripts | Ready for training. |
+| **Long audio + transcript (beta)** | A collection (.zip) of long, unsegmented audio files (longer than 20 seconds), paired with a transcript (.txt) that contains all spoken words. | You have audio files and matching transcripts, but they are not segmented into utterances. | Segmentation (using batch transcription).<br>Audio format transformation where required. |
+| **Audio only (beta)** | A collection (.zip) of audio files without a transcript. | You only have audio files available, without transcripts. | Segmentation + transcript generation (using batch transcription).<br>Audio format transformation where required.|
Files should be grouped by type into a dataset and uploaded as a zip file. Each dataset can only contain a single data type.

> [!NOTE]
-> The maximum number of datasets allowed to be imported per subscription is 10 .zip files for free subscription (F0) users and 500 for standard subscription (S0) users.
+> The maximum number of datasets allowed to be imported per subscription is 10 zip files for free subscription (F0) users and 500 for standard subscription (S0) users.
## Individual utterances + matching transcript

You can prepare recordings of individual utterances and the matching transcript in two ways. Either write a script and have it read by a voice talent or use publicly available audio and transcribe it to text. If you do the latter, edit disfluencies from the audio files, such as "um" and other filler sounds, stutters, mumbled words, or mispronunciations.
-To produce a good voice font, create the recordings in a quiet room with a high-quality microphone. Consistent volume, speaking rate, speaking pitch, and expressive mannerisms of speech are essential.
+To produce a good voice model, create the recordings in a quiet room with a high-quality microphone. Consistent volume, speaking rate, speaking pitch, and expressive mannerisms of speech are essential.
> [!TIP] > To create a voice for production use, we recommend you use a professional recording studio and voice talent. For more information, see [How to record voice samples for a custom voice](record-custom-voice-samples.md).
@@ -86,9 +94,6 @@ Below is an example of how the transcripts are organized utterance by utterance
```

It's important that the transcripts are 100% accurate transcriptions of the corresponding audio. Errors in the transcripts will introduce quality loss during the training.
-> [!TIP]
-> When building production text-to-speech voices, select utterances (or write scripts) that take into account both phonetic coverage and efficiency. Having trouble getting the results you want? [Contact the Custom Voice](mailto:speechsupport@microsoft.com) team to find out more about having us consult.
-
## Long audio + transcript (beta)

In some cases, you may not have segmented audio available. We provide a service (beta) through the custom voice portal to help you segment long audio files and create transcriptions. Keep in mind, this service will be charged toward your speech-to-text subscription usage.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-custom-voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-voice.md
@@ -34,10 +34,11 @@ The diagram below highlights the steps to create a custom voice model using the
## Custom Neural voices
-The neural voice customization capability is currently in public preview, limited to selected customers. Fill out this [application form](https://go.microsoft.com/fwlink/?linkid=2108737) to get started.
+Custom Voice currently supports both standard and neural tiers. Custom Neural Voice empowers users to build higher-quality voice models while requiring less data, and provides measures to help you deploy AI responsibly. We recommend using Custom Neural Voice to develop more realistic voices for more natural conversational interfaces, enabling your customers and end users to benefit from the latest Text-to-Speech technology in a responsible way. [Learn more about Custom Neural Voice](https://aka.ms/CNV-Transparency-Note).
> [!NOTE]
-> As part of Microsoft's commitment to designing responsible AI, our intent is to protect the rights of individuals and society, and foster transparent human-computer interactions. For this reason, Custom Neural Voice is not generally available to all customers. You may gain access to the technology only after your applications are reviewed and you have committed to using it in alignment with our ethics principles. Learn more about our [application gating process](./concepts-gating-overview.md).
+> As part of Microsoft's commitment to designing responsible AI, we have limited the use of Custom Neural Voice. You may gain access to the technology only after your applications are reviewed and you have committed to using it in alignment with our responsible AI principles. Learn more about our [policy on limited access](https://aka.ms/gating-overview) and [apply here](https://aka.ms/customneural).
+> The [languages](language-support.md#customization) and [regions](regions.md#custom-voices) supported for the standard and neural version of Custom Voice are different. Check the details before you start.
## Set up your Azure account
@@ -51,7 +52,7 @@ Once you've created an Azure account and a Speech service subscription, you'll n
4. If you'd like to switch to another Speech subscription, use the cog icon located in the top navigation. > [!NOTE]
-> You must have a F0 or a S0 key created in Azure before you can use the service.
+> You must have an F0 or an S0 Speech service key created in Azure before you can use the service. Custom Neural Voice only supports the S0 tier.
## How to create a project
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/language-support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
@@ -387,10 +387,30 @@ More than 75 standard voices are available in over 45 languages and locales, whi
### Customization
-Voice customization is available for `de-DE`, `en-GB`, `en-IN`, `en-US`, `es-MX`, `fr-FR`, `it-IT`, `pt-BR`, and `zh-CN`. Select the right locale that matches the training data you have to train a custom voice model. For example, if the recording data you have is spoken in English with a British accent, select `en-GB`.
+Custom Voice is available in the standard and the neural tier. The languages supported are different for these two tiers.
+
+| Language | Locale | Standard | Neural |
+|--|--|--|--|
+| Chinese (Mandarin, Simplified) | `zh-CN` | Yes | Yes |
+| Chinese (Mandarin, Simplified), English bilingual | `zh-CN` bilingual | Yes | Yes |
+| English (Australia) | `en-AU` | No | Yes |
+| English (India) | `en-IN` | Yes | Yes |
+| English (United Kingdom) | `en-GB` | Yes | Yes |
+| English (United States) | `en-US` | Yes | Yes |
+| French (Canada) | `fr-CA` | No | Yes |
+| French (France) | `fr-FR` | Yes | Yes |
+| German (Germany) | `de-DE` | Yes | Yes |
+| Italian (Italy) | `it-IT` | Yes | Yes |
+| Japanese (Japan) | `ja-JP` | No | Yes |
+| Korean (Korea) | `ko-KR` | No | Yes |
+| Portuguese (Brazil) | `pt-BR` | Yes | Yes |
+| Spanish (Mexico) | `es-MX` | Yes | Yes |
+| Spanish (Spain) | `es-ES` | No | Yes |
+
+Select the right locale that matches the training data you have to train a custom voice model. For example, if the recording data you have is spoken in English with a British accent, select `en-GB`.
> [!NOTE]
-> We do not support bi-lingual model training in Custom Voice, except for the Chinese-English bi-lingual. Select "Chinese-English bilingual" if you want to train a Chinese voice that can speak English as well. Voice training in all locales starts with a data set of 2,000+ utterances, except for the `en-US` and `zh-CN` where you can start with any size of training data.
+> We do not support bilingual model training in Custom Voice, except for Chinese-English bilingual. Select "Chinese-English bilingual" if you want to train a Chinese voice that can speak English as well. Chinese-English bilingual model training using the standard method is available in North Europe and North Central US only. Custom Neural Voice training is available in UK South and East US.
## Speech translation
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/record-custom-voice-samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/record-custom-voice-samples.md
@@ -20,6 +20,14 @@ Before you can make these recordings, though, you need a script: the words that
Many small but important details go into creating a professional voice recording. This guide is a roadmap for a process that will help you get good, consistent results.
+> [!NOTE]
+> If you would like to train a neural voice, you must specify a voice talent profile with an audio consent file in which the voice talent acknowledges that his or her speech data will be used to train a custom voice model. When preparing your recording script, make sure you include the following sentence:
+
+> "I [state your first and last name] am aware that recordings of my voice will be used by [state the name of the company] to create and use a synthetic version of my voice."
+> This sentence is used to verify that the training data was recorded by the same person who gave the consent. Read more about [voice talent verification](https://aka.ms/CNV-data-privacy) here.
+
+> Custom Neural Voice is available with limited access. Make sure you understand the [responsible AI requirements](https://aka.ms/gating-overview) and [apply for access here](https://aka.ms/customneural).
+
+ > [!TIP] > For the highest quality results, consider engaging Microsoft to help develop your custom voice. Microsoft has extensive experience producing high-quality voices for its own products, including Cortana and Office.
@@ -51,7 +59,7 @@ Your voice talent is the other half of the equation. They must be able to speak
Recording custom voice samples can be more fatiguing than other kinds of voice work. Most voice talent can record for two or three hours a day. Limit sessions to three or four a week, with a day off in-between if possible.
-Recordings made for a voice model should be emotionally neutral. That is, a sad utterance should not be read in a sad way. Mood can be added to the synthesized speech later through prosody controls. Work with your voice talent to develop a "persona" that defines the overall sound and emotional tone of the custom voice. In the process, you'll pinpoint what "neutral" sounds like for that persona.
+Work with your voice talent to develop a "persona" that defines the overall sound and emotional tone of the custom voice. In the process, you'll pinpoint what "neutral" sounds like for that persona. Using the Custom Neural Voice capability, you can train a model that speaks with emotions. Define the "speaking styles" and ask your voice talent to read the script in a way that resonates with the styles you want.
A persona might have, for example, a naturally upbeat personality. So "their" voice might carry a note of optimism even when they speak neutrally. However, such a personality trait should be subtle and consistent. Listen to readings by existing voices to get an idea of what you're aiming for.
@@ -206,7 +214,7 @@ Listen to each file carefully. At this stage, you can edit out small unwanted so
Convert each file to 16 bits and a sample rate of 16 kHz before saving and, if you recorded the studio chatter, remove the second channel. Save each file in WAV format, naming the files with the utterance number from your script.
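As one way to do this conversion (an illustration, not a requirement; any audio editor works), a command-line tool such as ffmpeg can resample, reduce the bit depth, and drop the second channel in one pass. The file names here are hypothetical:

```console
ffmpeg -i 0001-raw.wav -ac 1 -ar 16000 -acodec pcm_s16le 0001.wav
```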
-Finally, create the *transcript* that associates each WAV file with a text version of the corresponding utterance. [Creating custom voice fonts](./how-to-custom-voice-create-voice.md) includes details of the required format. You can copy the text directly from your script. Then create a Zip file of the WAV files and the text transcript.
+Finally, create the *transcript* that associates each WAV file with a text version of the corresponding utterance. [Creating custom voices](./how-to-custom-voice-create-voice.md) includes details of the required format. You can copy the text directly from your script. Then create a Zip file of the WAV files and the text transcript.
Archive the original recordings in a safe place in case you need them later. Preserve your script and notes, too.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/rest-text-to-speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/rest-text-to-speech.md
@@ -55,9 +55,11 @@ The `voices/list` endpoint allows you to get a full list of voices for a specifi
| Korea Central | `https://koreacentral.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| North Central US | `https://northcentralus.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| North Europe | `https://northeurope.tts.speech.microsoft.com/cognitiveservices/voices/list` |
+| South Africa North | `https://southafricanorth.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| South Central US | `https://southcentralus.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| Southeast Asia | `https://southeastasia.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| UK South | `https://uksouth.tts.speech.microsoft.com/cognitiveservices/voices/list` |
+| West Central US | `https://westcentralus.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| West Europe | `https://westeurope.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| West US | `https://westus.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| West US 2 | `https://westus2.tts.speech.microsoft.com/cognitiveservices/voices/list` |
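
For example, a GET request to one of these endpoints with your resource key returns the voice list as JSON. A minimal sketch in JavaScript (Node 18+ in an ES module, where `fetch` and top-level `await` are built in); the key is a placeholder:

```javascript
// Sketch only: substitute your own Speech resource key and region.
const url = "https://westus.tts.speech.microsoft.com/cognitiveservices/voices/list";

const response = await fetch(url, {
  headers: { "Ocp-Apim-Subscription-Key": "YOUR_SPEECH_KEY" }
});
const voices = await response.json(); // an array of voice metadata objects
console.log(voices.length, "voices available");
```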
@@ -273,5 +275,5 @@ If the HTTP status is `200 OK`, the body of the response contains an audio file
## Next steps

- [Create a free Azure account](https://azure.microsoft.com/free/cognitive-services/)
-- [Asynchronous synthesis for long-form audio](./long-audio-api.md)
+- [Asynchronous synthesis for long-form audio](quickstarts/text-to-speech/async-synthesis-long-form-audio.md)
- [Get started with Custom Voice](how-to-custom-voice.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/whats-new https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/whats-new.md
@@ -1,80 +0,0 @@
- Title: What's new in docs - Speech service-
-description: Learn about documentation updates for the Azure Speech service.
------ Previously updated : 11/06/2020---
-# Speech service: what's new in docs
-
-Welcome! This page covers what's new in Speech service docs. Check back every month for information on service changes, doc additions and updates this month.
-
-### Service updates
-
-If you want to learn about updates to the Speech service, Speech SDK, Speech Devices SDK, Speech CLI, or Speech containers see:
-* [Speech SDK release notes](releasenotes.md).
-* [Speech Devices SDK release notes](devices-sdk-release-notes.md)
-* [Cognitive Services container release notes](../containers/container-image-tags.md)
-
-## May 2020
-
-### New articles
-
-* [Improve a model for Custom Speech](./how-to-custom-speech-evaluate-data.md)
-
-### Updated articles
-
-* [About the Speech SDK audio input stream API](how-to-use-audio-input-streams.md) - Updated allowed samples per second.
-* [Automatic language detection for speech-to-text](how-to-automatic-language-detection.md) - Added Objective-C instructions to documentation.
-* [Choose a speech recognition mode](./get-started-speech-to-text.md) - Added JavaScript instructions to documentation.
-* [Configure RHEL/CentOS 7 for Speech SDK](how-to-configure-rhel-centos-7.md) - Improved setup instructions.
-* [Phrase Lists for speech-to-text](./get-started-speech-to-text.md) - Added JavaScript instructions to documentation.
-* [Quickstart: Asynchronous synthesis for long-form audio in Python (Preview)](./long-audio-api.md) - Updated with support for public neural voices, and associated parameters.
-* [Quickstart: Recognize speech from an audio file](./get-started-speech-to-text.md) - Added JavaScript instructions to documentation.
-* [Quickstart: Recognize speech from a microphone](./get-started-speech-to-text.md) - Added Go and JavaScript instructions to documentation.
-* [Quickstart: Recognize speech stored in blob storage](./batch-transcription.md) - Added JavaScript instructions to documentation.
-* [Quickstart: Recognize speech, intents, and entities with Language Understanding (LUIS)](quickstarts/intent-recognition.md)
-* [Quickstart: Setup development environment](quickstarts/setup-platform.md) - Added JavaScript instructions to documentation.
-* [Quickstart: Synthesize speech into an audio file](./get-started-text-to-speech.md) - Added JavaScript instructions to documentation.
-* [Quickstart: Synthesize speech to a speaker](./get-started-text-to-speech.md) - Added JavaScript instructions to documentation.
-* [What is a keyword?](custom-keyword-overview.md) - Updated get started content and links.
-* [Specify source language for speech-to-text](how-to-specify-source-language.md) - Added JavaScript and Objective-C instructions to documentation.
-
-### GitHub issues opened in May
-
-These issues were opened in May. This table lists the user that opened the issue, when it was opened, and it's status.
-
-This table is updated monthly and only reflects issues opened in May.
-
-|ID|User|Description|Opened|State|Type|
-| : | : | : | : | : | : |
-|[56045](https://github.com/MicrosoftDocs/azure-docs/issues/56045)|rhalaly|Activity dropped because the bot's endpoint is missing|2020-05-31|Closed|Issue|
-|[56038](https://github.com/MicrosoftDocs/azure-docs/issues/56038)|rhalaly|Wrong publishing bot steps|2020-05-31|Open|Issue|
-|[56014](https://github.com/MicrosoftDocs/azure-docs/issues/56014)|mosdav|Add more clear doc about PCM samples format|2020-05-30|Open|Issue|
-|[55984](https://github.com/MicrosoftDocs/azure-docs/issues/55984)|chschrae|Title doesn't match side bar|2020-05-29|Closed|Issue|
-|[55857](https://github.com/MicrosoftDocs/azure-docs/issues/55857)|nitinbhatia-dev|CLI error with wave file|2020-05-28|Closed|Issue|
-|[55717](https://github.com/MicrosoftDocs/azure-docs/pull/55717)|dargilco|Update speech-sdk.md|2020-05-27|Open|Pull Request|
-|[55299](https://github.com/MicrosoftDocs/azure-docs/issues/55299)|Tirumala-K|Weird error with the unsupported voice name|2020-05-20|Closed|Issue|
-|[55099](https://github.com/MicrosoftDocs/azure-docs/issues/55099)|kmoore-riphaina|The documentation on the speech to text api is poor|2020-05-18|Open|Issue|
-|[55032](https://github.com/MicrosoftDocs/azure-docs/issues/55032)|dubbySwords|Microsoft CognitiveServices Speech class SpeechRecognizer, cannot gather a resulting text|2020-05-18|Closed|Issue|
-|[55031](https://github.com/MicrosoftDocs/azure-docs/issues/55031)|dubbySwords|Not clear|2020-05-18|Closed|Issue|
-|[55027](https://github.com/MicrosoftDocs/azure-docs/issues/55027)|ovishesh|Graphics not visible in Dark theme|2020-05-17|Closed|Issue|
-|[54919](https://github.com/MicrosoftDocs/azure-docs/issues/54919)|kmoore-riphaina|missing section?|2020-05-15|Open|Issue|
-|[54743](https://github.com/MicrosoftDocs/azure-docs/issues/54743)|fifteenjoy|Running Speech service containers fail|2020-05-13|Open|Issue|
-|[54550](https://github.com/MicrosoftDocs/azure-docs/issues/54550)|manish-95|Example for Pronunciation file|2020-05-11|Open|Issue|
-|[54522](https://github.com/MicrosoftDocs/azure-docs/issues/54522)|pjmlp|Java sample is incorrect.|2020-05-10|Open|Issue|
-|[54387](https://github.com/MicrosoftDocs/azure-docs/issues/54387)|ziadhassan7|Can't get pronunciation Score|2020-05-08|Closed|Issue|
-|[54382](https://github.com/MicrosoftDocs/azure-docs/issues/54382)|jgtellez1|YAML file template|2020-05-07|Closed|Issue|
-|[54208](https://github.com/MicrosoftDocs/azure-docs/issues/54208)|paparush|C# Sample Code does not prompt user to speak.|2020-05-06|Closed|Issue|
-|[54132](https://github.com/MicrosoftDocs/azure-docs/pull/54132)|anthonsu|Upgrade TTS from v1.3 to v1.4|2020-05-05|Closed|Pull Request|
-|[54111](https://github.com/MicrosoftDocs/azure-docs/pull/54111)|anthonsu|Update custom STT latest version to v2.2.0|2020-05-05|Closed|Pull Request|
-|[53919](https://github.com/MicrosoftDocs/azure-docs/issues/53919)|eyast|Links to github projects are broken|2020-05-03|Open|Issue|
-|[53892](https://github.com/MicrosoftDocs/azure-docs/issues/53892)|viju2008|Property to define: Max Audio Recognition time for Android Microphone. Stopping Audio Recognition after 15 seconds|2020-05-02|Closed|Issue|
-|[53796](https://github.com/MicrosoftDocs/azure-docs/pull/53796)|singhsaumya|Custom Commands: docs update|2020-05-01|Closed|Pull Request|
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/includes/service-specific-updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/includes/service-specific-updates.md
@@ -17,5 +17,4 @@
* [Language Understanding (LUIS)](../LUIS/whats-new.md)
* [Personalizer](../personalizer/whats-new.md)
* [QnA Maker](../QnAMaker/whats-new.md)
-* [Speech service](../Speech-Service/whats-new.md)
* [Text Analytics](../text-analytics/whats-new.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/how-tos/text-analytics-for-health https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-for-health.md
@@ -8,7 +8,7 @@
Previously updated : 01/21/2021 Last updated : 02/03/2021
@@ -110,6 +110,9 @@ Document size must be under 5,120 characters per document. For the maximum numbe
For both the container and hosted web API, you must create a POST request. You can [use Postman](text-analytics-how-to-call-api.md), a cURL command or the **API testing console** in the [Text Analytics for health hosted API reference](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1-preview-3/operations/Health) to quickly construct and send a POST request to the hosted web API in your desired region.
+> [!NOTE]
+> Both the asynchronous `/analyze` and `/health` endpoints are only available in the following regions: West US 2, East US 2, Central US, North Europe and West Europe. To make successful requests to these endpoints, please make sure your resource is created in one of these regions.
+ Below is an example of a JSON file attached to the Text Analytics for health API request's POST body:
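+
+A minimal sketch of the request body, assuming a single English document (the `id` and `text` values are illustrative):
+
+```json
+{
+  "documents": [
+    {
+      "language": "en",
+      "id": "1",
+      "text": "Patient reported an itchy rash after taking 100 mg of ibuprofen twice daily."
+    }
+  ]
+}
+```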
communication-services https://docs.microsoft.com/en-us/azure/communication-services/concepts/sdk-options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/sdk-options.md
@@ -42,7 +42,7 @@ Publishing locations for individual client library packages are detailed below.
| Chat | [npm](https://www.npmjs.com/package/@azure/communication-chat) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Chat) | [PyPi](https://pypi.org/project/azure-communication-chat/) | [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | [GitHub](https://github.com/Azure/azure-sdk-for-ios/releases) | [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | - | | SMS | [npm](https://www.npmjs.com/package/@azure/communication-sms) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Sms) | [PyPi](https://pypi.org/project/azure-communication-sms/) | [Maven](https://search.maven.org/artifact/com.azure/azure-communication-sms) | - | - | - | | Calling | [npm](https://www.npmjs.com/package/@azure/communication-calling) | - | - | - | [GitHub](https://github.com/Azure/Communication/releases/tag/v1.0.0-beta.2) | [Maven](https://search.maven.org/artifact/com.azure.android/azure-communication-calling/) | - |
-| Reference Documentation | [docs](https://azure.github.io/azure-sdk-for-js/communication.html) | [docs](https://azure.github.io/azure-sdk-for-net/communication.html) | - | [docs](http://azure.github.io/azure-sdk-for-java/communication.html) | (Obj-C) ✔️ | ✔️ | - |
+| Reference Documentation | [docs](https://azure.github.io/azure-sdk-for-js/communication.html) | [docs](https://azure.github.io/azure-sdk-for-net/communication.html) | - | [docs](http://azure.github.io/azure-sdk-for-java/communication.html) | [docs](https://docs.microsoft.com/objectivec/communication-services/calling/) | [docs](https://docs.microsoft.com/java/api/com.azure.communication.calling?view=communication-services-java-android) | - |
## REST APIs Communication Services APIs are documented alongside other Azure REST APIs in [docs.microsoft.com](/rest/api/azure/). This documentation will tell you how to structure your HTTP messages and offers guidance for using Postman. This documentation is also offered in Swagger format on [GitHub](https://github.com/Azure/azure-rest-api-specs).
@@ -104,4 +104,4 @@ For more information, see the following client library overviews:
To get started with Azure Communication

- [Create Azure Communication Resources](../quickstarts/create-communication-resource.md)
-- Generate [User Access Tokens](../quickstarts/access-tokens.md)
+- Generate [User Access Tokens](../quickstarts/access-tokens.md)
communication-services https://docs.microsoft.com/en-us/azure/communication-services/concepts/ui-framework/ui-sdk-features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/ui-framework/ui-sdk-features.md
@@ -0,0 +1,85 @@
+
+ Title: Azure Communication Services UI Framework capabilities
+
+description: Learn about UI Framework capabilities
++ Last updated : 11/16/2020+++++
+# UI Framework capabilities
++
+The Azure Communication Services UI Framework lets you build communication experiences using a set of reusable components. These components come in two flavors: **Base** components are the most basic building blocks of your UI experience, while combinations of these base components are called **composite** components.
+
+## UI Framework composite components
+
+| Composite | Description | Web | Android | iOS |
+|-----------|-------------|-----|---------|-----|
+| Group Calling Composite | Light-weight voice and video outbound calling experience for Azure Communication Services calling using Fluent UI design assets. Supports group calling using an Azure Communication Services Group ID. The composite also supports one-to-one calling by referencing an Azure Communication Services identity or, for PSTN, a phone number procured through Azure. | React | | |
+| Group Chat Composite | Light-weight chat experience for Azure Communication Services using Fluent UI design assets. This experience concentrates on delivering a simple chat client that can connect to Azure Communication Services threads. It allows users to send messages and see received messages with typing indicators and read receipts. It scales from 1:1 to group chat scenarios. Supports a single chat thread. | React | | |
+
+## UI Framework base components
+
+| Component | Description | Web | Android | iOS |
+|-----------|-------------|-----|---------|-----|
+| Calling Provider | Core initializing component for calling. Required component to then initialize other components on top of it. Handles core logic to initialize the calling client using Azure Communication Services access tokens. Supports Group join. | React | N/A | N/A |
+| Media Controls | Allows users to manage the current call by toggling mute, turning video on/off and end the call. | React | N/A | N/A |
+| Media Gallery | Showcase all call participants in a single gallery. Gallery supports both video-enabled and static participants. For video-enabled participants, video is rendered. | React | N/A | N/A |
+| Microphone Settings | Pick the microphone to be used for calling. This control can be used before and during a call to select the microphone device. | React | N/A | N/A |
+| Camera Settings | Pick the camera to be used for video calling. This control can be used before and during a call to select the video device. | React | N/A | N/A |
+| Device Settings | Combines microphone and camera settings into a single component | React | N/A | N/A |
+| Chat Provider | Core initializing component for chat. Required component to then initialize other components on top of it. Handles core logic to initialize the chat client with an Azure Communication Services access token and the thread that it will join. | React | N/A | N/A |
+| Send Box | Input component that allows users to send messages to the chat thread. Input supports text, hyperlinks, emojis and other Unicode characters including other alphabets. | React | N/A | N/A |
+| Chat Thread | Thread component that shows the user both received and sent messages with their sender information. The thread supports typing indicators and read receipts. You can scroll these threads to review chat history. | React | N/A | N/A |
+| Participant List | Show all the participants of the call or chat thread as a list. | React | N/A | N/A |
+
+## UI Framework capabilities
+
+| Feature | Group Calling Composite | Group Chat Composite | Base Components |
+|---------|-------------------------|----------------------|-----------------|
+| Join Teams Meeting | | | |
+| Join Teams Live Event | | | |
+| Start VoIP call to Teams user | | | |
+| Join a Teams Meeting Chat | | | |
+| Join Azure Communication Services call with Group ID | ✔ | | ✔ |
+| Start a VoIP call to one or more Azure Communication Services users | | | |
+| Join an Azure Communication Services chat thread | | ✔ | ✔ |
+| Mute/unmute call | ✔ | | ✔ |
+| Video on/off on call | ✔ | | ✔ |
+| Screen Sharing | ✔ | | ✔ |
+| Participant gallery | ✔ | | ✔ |
+| Microphone management | ✔ | | ✔ |
+| Camera management | ✔ | | ✔ |
+| Call Lobby | | | ✔ |
+| Send chat message | | ✔ | |
+| Receive chat message | | ✔ | ✔ |
+| Typing Indicators | | ✔ | ✔ |
+| Read Receipt | | ✔ | ✔ |
+| Participant List | | | ✔ |
++
+## Customization support
+
+| Component Type | Themes | Layout | Data Models |
+|----------------|--------|--------|-------------|
+| Composite Component | N/A | N/A | N/A |
+| Base Component | N/A | Layout of components can be modified using external styling | N/A |
++
+## Platform support
+
+| SDK | Windows | macOS | Ubuntu | Linux | Android | iOS |
+|--------|---------|-------|--------|-------|---------|-----|
+| UI SDK | Chrome\*, new Edge | Chrome\*, Safari\*\* | Chrome\* | Chrome\* | Chrome\* | Safari\*\* |
+
+\*Note that the latest version of Chrome is supported in addition to the
+previous two releases.
+
+\*\*Note that Safari versions 13.1+ are supported. Outgoing video for Safari
+macOS is not yet supported, but it is supported on iOS. Outgoing screen sharing
+is only supported on desktop browsers.
communication-services https://docs.microsoft.com/en-us/azure/communication-services/concepts/ui-framework/ui-sdk-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/ui-framework/ui-sdk-overview.md
@@ -0,0 +1,86 @@
+
+ Title: Azure Communication Services UI Framework overview
+
+description: Learn about Azure Communication Services UI Framework
++ Last updated : 11/16/2020+++++
+# Azure Communication Services UI Framework
+++
+Azure Communication Services UI Framework makes it easy for you to build modern communications user experiences. It gives you a library of production-ready UI components that you can drop into your applications:
+
+- **Composite Components** - These components are turn-key solutions that implement common communication scenarios. You can quickly add video calling or chat experiences to your applications. Composites are open-source components built using base components.
+- **Base Components** - These components are open-source building blocks that let you build custom communications experiences. Components are offered for both calling and chat capabilities and can be combined to build experiences.
+
+These UI client libraries all use [Microsoft's Fluent design language](https://developer.microsoft.com/fluentui/) and assets. Fluent UI provides a foundational layer for the UI Framework that has been battle tested across Microsoft products.
+
+## Differentiating Components and Composites
+
+**Base Components** are built on top of core Azure Communication Services client libraries and implement basic actions such as initializing the core client libraries, rendering video, and providing user controls for muting, video on/off, etc. You can use these **Base Components** to build your own custom layout experiences using pre-built, production ready communication components.
++
+**Composite Components** combine multiple **Base Components** to create more complete communication experiences. These higher-level components can be easily integrated into an existing app to drop in a fully fledged communication experience without the task of building it from scratch. Developers can concentrate on building the surrounding experience and desired flow in their apps and leave the communications complexity to Composite Components.
++
+## What UI Framework is best for my project?
+
+Understanding these requirements will help you choose the right client library:
+
+- **How much customization do you desire?** Azure Communication core client libraries don't have a UX and are designed so you can build whatever UX you want. UI Framework components provide UI assets at the cost of reduced customization.
+- **Do you require Meeting features?** The Meeting system has several unique capabilities not currently available in the core Azure Communication Services client libraries, such as blurred background and raised hand.
+- **What platforms are you targeting?** Different platforms have different capabilities.
+
+Details about feature availability in the various [UI SDKs are available here](ui-sdk-features.md), but key trade-offs are summarized below.
+
+|Client library / SDK|Implementation Complexity| Customization Ability| Calling| Chat| [Teams Interop](./../voice-video-calling/teams-interop.md)|
+|---|---|---|---|---|---|
+|Composite Components|Low|Low|✔|✔|✕|
+|Base Components|Medium|Medium|✔|✔|✕|
+|Core client libraries|High|High|✔|✔|✔|
+
+## Cost
+
+Usage of the Azure Communication Services UI Framework does not incur any extra Azure cost or metering. You only pay for usage of the underlying services, using the same Calling, Chat, and PSTN meters.
+
+## Supported use cases
+
+Calling:
+
+- Join Azure Communication Services call with Group ID
+
+Chat:
+
+- Join Azure Communication Services chat with Thread ID
+
+## Supported identities
+
+An Azure Communication Services identity is required to initialize the UI Framework and authenticate to the service. For more information on authentication, see [Authentication](../authentication.md) and [Access Tokens](../../quickstarts/access-tokens.md).
++
+## Recommended architecture
++
+Composite and Base Components are initialized using an Azure Communication Services access token. Access tokens should be procured from Azure Communication Services through a
+trusted service that you manage. See [Quickstart: Create Access Tokens](../../quickstarts/access-tokens.md) and [Trusted Service Tutorial](../../tutorials/trusted-service-tutorial.md) for more information.
+
+These client libraries also require the context for the call or chat they will join. Similar to user access tokens, this context should be disseminated to clients via your own trusted service. The table below summarizes the initialization and resource management functions that you need to operationalize; a small sketch of the token flow follows the table.
+
+| Contoso Responsibilities | UI Framework Responsibilities |
+|-|--|
+| Provide access token from Azure | Pass through given access token to initialize components |
+| Provide refresh function | Refresh access token using developer provided function |
+| Retrieve/Pass join information for call or chat | Pass through call and chat information to initialize components |
+| Retrieve/Pass user information for any custom data model | Pass through custom data model to components to render |
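+
+As a sketch of the first two rows, a client might fetch its token and context from your trusted service before initializing the providers. The endpoint URL and response shape below are hypothetical; your service defines its own contract:
+
+```javascript
+// Sketch only: "https://your-trusted-service.example/token" is a hypothetical endpoint.
+async function getTokenAndContext() {
+  const response = await fetch("https://your-trusted-service.example/token");
+  // Hypothetical response shape: the access token plus the call/chat context to join.
+  const { token, userId, groupId, threadId } = await response.json();
+  return { token, userId, groupId, threadId };
+}
+```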
communication-services https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/ui-framework/create-your-own-components https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/ui-framework/create-your-own-components.md
@@ -0,0 +1,187 @@
+
+ Title: Create your own UI Framework component
+
+description: In this quickstart, you'll learn how to build a custom component compatible with the UI Framework
++ Last updated : 11/16/2020+++++
+# Quickstart: Create Your Own UI Framework Component
++
+Get started with Azure Communication Services by using the UI Framework to quickly integrate communication experiences into your applications.
+
+In this quickstart, you'll learn how to create your own components using the pre-defined state interface offered by UI Framework. This approach is ideal for developers who need more customization and want to use their own design assets for the experience.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- [Node.js](https://nodejs.org/) Active LTS and Maintenance LTS versions (Node 12 Recommended).
+- An active Communication Services resource. [Create a Communication Services resource](./../create-communication-resource.md).
+- A User Access Token to instantiate the call client. Learn how to [create and manage user access tokens](./../access-tokens.md).
+
+UI Framework requires a React environment to be set up, which we'll do next. If you already have a React App, you can skip this section.
+
+### Set Up React App
+
+We'll use the create-react-app template for this quickstart. For more information, see: [Get Started with React](https://reactjs.org/docs/create-a-new-react-app.html)
+
+```console
+
+npx create-react-app my-app
+
+cd my-app
+
+```
+
+At the end of this process you should have a full application inside of the folder `my-app`. For this quickstart, we'll be modifying files inside of the `src` folder.
+
+### Install the package
+
+Use the `npm install` command to install the Azure Communication Services Calling client library for JavaScript. Move the provided tarball (Private Preview) over to the my-app directory.
+
+```console
+
+//For Private Preview install tarball
+
+npm install --save ./{path for tarball}
+
+```
+
+The `--save` option lists the library as a dependency in your **package.json** file.
+
+### Run Create React App
+
+Let's test the Create React App installation by running:
+
+```console
+
+npm run start
+
+```
+
+## Object model
+
+The following classes and interfaces handle some of the major features of the Azure Communication Services UI client library:
+
+| Name | Description |
+| - | |
+| Provider| Fluent UI provider that allows developers to modify underlying Fluent UI components|
+| CallingProvider| Calling Provider to instantiate a call. Required to add base components|
+| ChatProvider | Chat Provider to instantiate a chat thread. Required to add base components|
+| connectFuncsToContext | Method to connect UI Framework components with underlying providers using mappers |
+| MapToChatMessageProps | Chat message data layer mapper which provides components with chat message props |
++
+## Initialize Chat Providers using Azure Communication Services credentials
+
+For this quickstart, we'll use chat as an example. For more information on calling, see the [Base Components Quickstart](./get-started-with-components.md) and [Composite Components Quickstart](./get-started-with-composites.md).
+
+Go to the `src` folder inside of `my-app` and look for the file `app.js`. Here we'll drop the following code to initialize our Chat Provider. This provider is in charge of maintaining the context of the call and chat experiences. To initialize the components, you'll need an access token retrieved from Azure Communication Services. For details on how to get an access token, see: [create and manage access tokens](./../access-tokens.md).
+
+UI Framework Components follow the same general architecture as the rest of the service. The components don't generate access tokens, group IDs, or thread IDs. These elements come from services that go through the proper steps to generate these IDs and pass them to the client application. For more information, see [Client Server Architecture](./../../concepts/client-and-server-architecture.md).
+
+`App.js`
+```javascript
+
+import {CallingProvider, ChatProvider} from "@azure/acs-ui-sdk"
+
+function App(props) {
+
+ return (
+ <ChatProvider
+ token={/*Insert the Azure Communication Services access token*/}
+ userId={/*Insert the Azure Communication Services user id*/}
+ displayName={/*Insert Display Name to be used for the user*/}
+ threadId={/*Insert id for group chat thread to be joined*/}
+ endpointUrl={/*Insert the environment URL for the Azure Resource used*/}
+ refreshTokenCallback={/*Optional, Insert refresh token call back function*/}
+ >
+ // Add Chat Components Here
+ </ChatProvider>
+ );
+}
+
+export default App;
+
+```
+
+Once initialized, this provider lets you build your own layout using UI Framework Components and any extra layout logic. The provider takes care of initializing all the underlying logic and properly connecting the different components together. Next we'll create a custom component using UI Framework mappers to connect to our chat provider.
++
+## Create a custom component using mappers
+
+Start by creating a new file called `SimpleChatThread.js` for the component, then import the UI Framework pieces we need. Here, we'll use out-of-the-box HTML and React to create a fully custom component for a simple chat thread. Using the `connectFuncsToContext` method with the `MapToChatMessageProps` mapper, we map props onto our `SimpleChatThread` custom component. These props give us access to the chat messages being sent and received, which we use to populate our simple thread.
+
+`SimpleChatThread.js`
+```javascript
+
+import {connectFuncsToContext, MapToChatMessageProps} from "@azure/acs-ui-sdk"
+
+function SimpleChatThread(props) {
+
+ return (
+ <div>
+ {props.chatMessages?.map((message) => (
+ <div key={message.id ?? message.clientMessageId}> {`${message.senderDisplayName}: ${message.content}`}</div>
+ ))}
+ </div>
+ );
+}
+
+export default connectFuncsToContext(SimpleChatThread, MapToChatMessageProps);
+
+```
+
+## Add your custom component to your application
+
+Now that we have our custom component ready, we will import it and add it to our layout.
+
+```javascript
+
+import {CallingProvider, ChatProvider} from "@azure/acs-ui-sdk"
+import SimpleChatThread from "./SimpleChatThread"
+
+function App(props) {
+
+ return (
+ <ChatProvider ... >
+ <SimpleChatThread />
+ </ChatProvider>
+ );
+}
+
+export default App;
+
+```
+
+## Run quickstart
+
+To run the code above, use the command:
+
+```console
+
+npm run start
+
+```
+
+To fully test the capabilities, you will need a second client with chat functionality to send messages that will be received by our Simple Chat Thread. See our [Calling Hero Sample](./../../samples/calling-hero-sample.md) and [Chat Hero Sample](./../../samples/chat-hero-sample.md) as potential options.
+
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Try UI Framework Composite Components](./get-started-with-composites.md)
+
+For more information, see the following resources:
+- [UI Framework Overview](../../concepts/ui-framework/ui-sdk-overview.md)
+- [UI Framework Capabilities](./../../concepts/ui-framework/ui-sdk-features.md)
+- [UI Framework Base Components Quickstart](./get-started-with-components.md)
communication-services https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/ui-framework/get-started-with-components https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/ui-framework/get-started-with-components.md
@@ -0,0 +1,247 @@
+
+ Title: Get started with Azure Communication Services UI Framework base components
+
+description: In this quickstart, you'll learn how to get started with UI Framework base components
++ Last updated : 11/16/2020+++++
+# Quickstart: Get started with UI Framework Base Components
++
+Get started with Azure Communication Services by using the UI Framework to quickly integrate communication experiences into your applications. In this quickstart, you'll learn how to integrate UI Framework base components into your application to build communication experiences.
+
+UI Framework components come in two flavors: Base and Composite.
+
+- **Base components** represent discrete communication capabilities; they're the basic building blocks that can be used to build complex communication experiences.
+- **Composite components** are turn-key experiences for common communication scenarios that have been built using **base components** as building blocks and packaged to be easily integrated into applications.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- [Node.js](https://nodejs.org/) Active LTS and Maintenance LTS versions (Node 12 Recommended).
+- An active Communication Services resource. [Create a Communication Services resource](./../create-communication-resource.md).
+- A User Access Token to instantiate the call client. Learn how to [create and manage user access tokens](./../access-tokens.md).
+
+## Setting up
+
+UI Framework requires a React environment to be set up, which we'll do next. If you already have a React App, you can skip this section.
+
+### Set Up React App
+
+We'll use the create-react-app template for this quickstart. For more information, see: [Get Started with React](https://reactjs.org/docs/create-a-new-react-app.html)
+
+```console
+
+npx create-react-app my-app
+
+cd my-app
+
+```
+
+At the end of this process, you should have a full application inside of the folder `my-app`. For this quickstart, we'll be modifying files inside of the `src` folder.
+
+### Install the package
+
+Use the `npm install` command to install the Azure Communication Services Calling client library for JavaScript. Move the provided tarball (Private Preview) over to the my-app directory.
+
+```console
+
+//For Private Preview install tarball
+
+npm install --save ./{path for tarball}
+
+```
+
+The `--save` option lists the library as a dependency in your **package.json** file.
+
+### Run Create React App
+
+Let's test the Create React App installation by running:
+
+```console
+
+npm run start
+
+```
+
+## Object model
+
+The following classes and interfaces handle some of the major features of the Azure Communication Services UI client library:
+
+| Name | Description |
+| - | |
+| Provider| Fluent UI provider that allows developers to modify underlying Fluent UI components|
+| CallingProvider| Calling Provider to instantiate a call. Required to add extra components|
+| ChatProvider | Chat Provider to instantiate a chat thread. Required to add extra components|
+| MediaGallery | Base component that shows call participants and their remote video streams |
+| MediaControls | Base component to control call including mute, video, share screen |
+| ChatThread | Base component that renders a chat thread with typing indicators, read receipts, etc. |
+| SendBox | Base component that allows user to input messages that will be sent to the joined thread|
+
+## Initialize Calling and Chat Providers using Azure Communication Services credentials
+
+Go to the `src` folder inside of `my-app` and look for the file `app.js`. Here we'll drop the following code to initialize our Calling and Chat providers. These providers are responsible for maintaining the context of the call and chat experiences. You can choose which one to use depending on the type of communication experience you're building. If needed, you can use both at the same time. To initialize the components, you'll need an access token retrieved from Azure Communication Services. For details on how to get access tokens, see: [create and manage access tokens](./../access-tokens.md).
+
+> [!NOTE]
+> The components don't generate access tokens, group IDs, or thread IDs. These elements come from services that go through the proper steps to generate these IDs and pass them to the client application. For more information, see: [Client Server Architecture](./../../concepts/client-and-server-architecture.md).
+>
+> For Example: The Chat Provider expects that the `userId` associated to the `token` being used to initialize it has already been joined to the `threadId` being provided. If the token hasn't been joined to the thread ID, then the Chat Provider will fail. For more information on chat, see: [Getting Started with Chat](./../chat/get-started.md)
+
+We'll use a Fluent UI theme to enhance the look and feel of the application:
+
+`App.js`
+```javascript
+
+import {CallingProvider, ChatProvider} from "@azure/acs-ui-sdk"
+import { mergeThemes, teamsTheme } from '@fluentui/react-northstar';
+import { Provider } from '@fluentui/react-northstar/dist/commonjs/components/Provider/Provider';
+import { svgIconStyles } from '@fluentui/react-northstar/dist/es/themes/teams/components/SvgIcon/svgIconStyles';
+import { svgIconVariables } from '@fluentui/react-northstar/dist/es/themes/teams/components/SvgIcon/svgIconVariables';
+import * as siteVariables from '@fluentui/react-northstar/dist/es/themes/teams/siteVariables';
+
+const iconTheme = {
+ componentStyles: {
+ SvgIcon: svgIconStyles
+ },
+ componentVariables: {
+ SvgIcon: svgIconVariables
+ },
+ siteVariables
+};
+
+function App(props) {
+
+ return (
+ <Provider theme={mergeThemes(iconTheme, teamsTheme)}>
+ <CallingProvider
+ displayName={/*Insert Display Name to be used for the user*/}
+ groupId={/*Insert GUID for group call to be joined*/}
+ token={/*Insert the Azure Communication Services access token*/}
+ refreshTokenCallback={/*Optional, Insert refresh token call back function*/}
+ >
+ {/* Add Calling Components Here */}
+ </CallingProvider>
+
+ {/*Note: Make sure that the userId associated to the token has been added to the provided threadId*/}
+
+ <ChatProvider
+ token={/*Insert the Azure Communication Services access token*/}
+ displayName={/*Insert Display Name to be used for the user*/}
+ threadId={/*Insert id for group chat thread to be joined*/}
+ endpointUrl={/*Insert the environment URL for the Azure Resource used*/}
+ refreshTokenCallback={/*Optional, Insert refresh token call back function*/}
+ >
+ {/* Add Chat Components Here */}
+ </ChatProvider>
+ </Provider>
+ );
+}
+
+export default App;
+
+```
+
+Once initialized, these providers let you build your own layout using UI Framework base components and any extra layout logic. The providers take care of initializing all the underlying logic and properly connecting the different components together. Next, we'll use various base components provided by the UI Framework to build communication experiences. You can customize the layout of these components and add any other custom components that you want to render with them.
+
+## Build UI Framework Calling Component Experiences
+
+For Calling, we'll use the `MediaGallery` and `MediaControls` components. For more information about them, see [UI Framework Capabilities](./../../concepts/ui-framework/ui-sdk-features.md). To start, in the `src` folder, create a new file called `CallingComponents.js`. Here we'll create a function component that holds our base components, which we'll then import in `app.js`. You can add extra layout and styling around the components.
+
+`CallingComponents.js`
+```javascript
+
+import {MediaGallery, MediaControls, MapToCallConfigurationProps, connectFuncsToContext} from "@azure/acs-ui-sdk"
+
+function CallingComponents(props) {
+
+ // Join the call once the call is initialized (props are populated by connectFuncsToContext)
+ if (props.isCallInitialized) {props.joinCall()}
+
+ return (
+ <div style = {{height: '35rem', width: '30rem', float: 'left'}}>
+ <MediaGallery/>
+ <MediaControls/>
+ </div>
+ );
+}
+
+export default connectFuncsToContext(CallingComponents, MapToCallConfigurationProps);
+
+```
+
+At the bottom of this file, we export the calling components using the `connectFuncsToContext` method from the UI Framework, together with the `MapToCallConfigurationProps` mapping function, to connect the calling UI components to the underlying state. This method yields a component with its props populated, which we then use to check state and join the call: the `isCallInitialized` property tells us whether the `CallAgent` is ready, and the `joinCall` method joins the call. The UI Framework also supports custom mapping functions for scenarios where developers want to control how data is pushed to the components.
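+
+As an illustration only, a custom mapping function might look like the sketch below. The shape of the context object passed to the function is an assumption based on the props used above (`isCallInitialized`, `joinCall`); consult the UI Framework reference for the actual contract:
+
+```javascript
+// Hypothetical custom mapping function: it receives the calling context and
+// returns only the props that CallingComponents needs.
+const MapToMyCallProps = (callContext) => ({
+  isCallInitialized: callContext.isCallInitialized,
+  joinCall: callContext.joinCall,
+});
+
+// Passed in place of MapToCallConfigurationProps:
+// export default connectFuncsToContext(CallingComponents, MapToMyCallProps);
+```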
+
+## Build UI Framework Chat Component Experiences
+
+For Chat, we will use the `ChatThread` and `SendBox` components. For more information about these components, see [UI Framework Capabilities](./../../concepts/ui-framework/ui-sdk-features.md). To start, in the `src` folder, create a new file called `ChatComponents.js`. Here we'll initialize a function component that will hold our base components to then import in `app.js`.
+
+`ChatComponents.js`
+```javascript
+
+import {ChatThread, SendBox} from '@azure/acs-ui-sdk'
+
+function ChatComponents() {
+
+ return (
+ <div style = {{height: '35rem', width: '30rem', float: 'left'}}>
+ <ChatThread />
+ <SendBox />
+ </div >
+ );
+}
+
+export default ChatComponents;
+
+```
+
+## Add Calling and Chat Components to the main application
+
+Back in the `app.js` file, we will now add the components to the `CallingProvider` and `ChatProvider` as shown below.
+
+`App.js`
+```javascript
+
+import ChatComponents from './ChatComponents';
+import CallingComponents from './CallingComponents';
+
+<Provider ... >
+ <CallingProvider .... >
+ <CallingComponents/>
+ </CallingProvider>
+
+ <ChatProvider .... >
+ <ChatComponents />
+ </ChatProvider>
+</Provider>
+
+```
+
+## Run quickstart
+
+To run the code above, use the command:
+
+```console
+
+npm run start
+
+```
+
+To fully test the capabilities, you will need a second client with calling and chat functionality to join the call and chat thread. See our [Calling Hero Sample](./../../samples/calling-hero-sample.md) and [Chat Hero Sample](./../../samples/chat-hero-sample.md) as potential options.
+
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Try UI Framework Composite Components](./get-started-with-composites.md)
+
+For more information, see the following resources:
+- [UI Framework Overview](../../concepts/ui-framework/ui-sdk-overview.md)
+- [UI Framework Capabilities](./../../concepts/ui-framework/ui-sdk-features.md)
communication-services https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/ui-framework/get-started-with-composites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/ui-framework/get-started-with-composites.md
@@ -0,0 +1,159 @@
+
+ Title: Get started with Azure Communication Services UI Framework SDK composite components
+
+description: In this quickstart, you'll learn how to get started with UI Framework Composite Components
+ Last updated : 11/16/2020
+# Quickstart: Get started with UI Framework Composite Components
++
+Get started with Azure Communication Services by using the UI Framework to quickly integrate communication experiences into your applications. In this quickstart, you'll learn how to integrate UI Framework Composite Components into your application to build communication experiences.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- [Node.js](https://nodejs.org/) Active LTS and Maintenance LTS versions (Node 12 Recommended).
+- An active Communication Services resource. [Create a Communication Services resource](./../create-communication-resource.md).
+- A User Access Token to instantiate the call composite. Learn how to [create and manage user access tokens](./../access-tokens.md).
+
+## Setting up
+
+The UI Framework requires a React environment, which we'll set up next. If you already have a React app, you can skip this section.
+
+### Set Up React App
+
+We will use the create-react-app template for this quickstart. For more information, see: [Get Started with React](https://reactjs.org/docs/create-a-new-react-app.html)
+
+```console
+
+npx create-react-app my-app
+
+cd my-app
+
+```
+
+At the end of this process, you should have a full application inside of the folder `my-app`. For this quickstart, we'll be modifying files inside of the `src` folder.
+
+### Install the package
+
+Use the `npm install` command to install the Azure Communication Services Calling client library for JavaScript. Move the provided tarball (Private Preview) over to the my-app directory.
+
+```console
+
+# For the Private Preview, install from the tarball
+
+npm install --save ./{path for tarball}
+
+```
+
+The `--save` option lists the library as a dependency in your **package.json** file.
+
+### Run Create React App
+
+Let's test the Create React App installation by running:
+
+```console
+
+npm run start
+
+```
+
+## Object model
+
+The following classes and interfaces handle some of the major features of the Azure Communication Services UI client library:
+
+| Name | Description |
+| ------------- | ----------- |
+| GroupCall | Composite component that renders a group calling experience with participant gallery and controls. |
+| GroupChat | Composite component that renders a group chat experience with chat thread and input |
++
+## Initialize Group Call and Group Chat Composite Components
+
+Go to the `src` folder inside of `my-app` and look for the file `app.js`. Here we'll drop the following code to initialize our Composite Components for Group Chat and Calling. You can choose which one to use depending on the type of communication experience you're building. If needed, you can use both at the same time. To initialize the components, you'll need an access token retrieved from Azure Communication Services. For details on how to get access tokens, see: [create and manage user access tokens](./../access-tokens.md).
+
+> [!NOTE]
+> The components don't generate access tokens, group IDs, or thread IDs. These elements come from services that go through the proper steps to generate these IDs and pass them to the client application. For more information, see: [Client Server Architecture](./../../concepts/client-and-server-architecture.md).
+>
+> For Example: The Group Chat composite expects that the `userId` associated to the `token` being used to initialize it has already been joined to the `threadId` being provided. If the token hasn't been joined to the thread ID, then the Group Chat composite will fail. For more information on chat, see: [Getting Started with Chat](./../chat/get-started.md)
++
+`App.js`
+```javascript
+
+import {GroupCall, GroupChat} from "@azure/acs-ui-sdk"
+
+function App(){
+
+ return(<>
+ {/* Example styling provided, developers can provide their own styling to position and resize components */}
+ <div style={{height: "35rem", width: "50rem", float: "left"}}>
+ <GroupCall
+ displayName={DISPLAY_NAME} //Required, Display name for the user entering the call
+ token={TOKEN} // Required, Azure Communication Services access token retrieved from authentication service
+ refreshTokenCallback={CALLBACK} //Optional, Callback to refresh the token in case it expires
+ groupId={GROUPID} //Required, Id for group call that will be joined. (GUID)
+ onEndCall = { () => {
+ //Optional, Action to be performed when the call ends
+ }}
+ />
+ </div>
+
+ {/*Note: Make sure that the userId associated to the token has been added to the provided threadId*/}
+ {/* Example styling provided, developers can provide their own styling to position and resize components */}
+ <div style={{height: "35rem", width: "30rem", float: "left"}}>
+ <GroupChat
+ displayName={DISPLAY_NAME} //Required, Display name for the user entering the chat
+ token={TOKEN} // Required, Azure Communication Services access token retrieved from authentication service
+ threadId={THREADID} //Required, Id for group chat thread that will be joined.
+ endpointUrl={ENDPOINT_URL} //Required, URL for Azure endpoint being used for Azure Communication Services
+ onRenderAvatar = { (acsId) => {
+ //Optional, function to override the avatar image on the chat thread. The function receives one parameter, the Azure Communication Services identity. Must return a React element.
+ }}
+ refreshToken = { () => {
+ //Optional, function to refresh the access token in case it expires
+ }}
+ options = {{
+ //Optional, options to define chat behavior
+ sendBoxMaxLength: number | undefined //Optional, Limit the max send box length based on viewport size change.
+ }}
+ />
+ </div>
+ </>);
+}
+
+export default App;
+
+```
+
+## Run quickstart
+
+To run the code above, use the command:
+
+```console
+
+npm run start
+
+```
+
+To fully test the capabilities, you will need a second client with calling and chat functionality to join the call and chat thread. See our [Calling Hero Sample](./../../samples/calling-hero-sample.md) and [Chat Hero Sample](./../../samples/chat-hero-sample.md) as potential options.
+
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Try UI Framework Base Components](./get-started-with-components.md)
+
+For more information, see the following resources:
+- [UI Framework Overview](../../concepts/ui-framework/ui-sdk-overview.md)
+- [UI Framework Capabilities](./../../concepts/ui-framework/ui-sdk-features.md)
communication-services https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/voice-video-calling/includes/pstn-call-js https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/includes/pstn-call-js.md
@@ -62,9 +62,9 @@ const hangUpPhoneButton = document.getElementById("hang-up-phone-button");
async function init() { const callClient = new CallClient();
- const tokenCredential = new AzureCommunicationUserCredential('your-token-here');
+ const tokenCredential = new AzureCommunicationTokenCredential('<USER ACCESS TOKEN with PSTN scope>');
callAgent = await callClient.createCallAgent(tokenCredential);
- // callButton.disabled = false;
+ // callPhoneButton.disabled = false;
} init();
container-registry https://docs.microsoft.com/en-us/azure/container-registry/allow-access-trusted-services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/allow-access-trusted-services.md
@@ -0,0 +1,104 @@
+
+ Title: Access network-restricted registry using trusted Azure service
+description: Enable a trusted Azure service instance to securely access a network-restricted container registry to pull or push images
+ Last updated : 01/29/2021
+# Allow trusted services to securely access a network-restricted container registry (preview)
+
+Azure Container Registry can allow select trusted Azure services to access a registry that's configured with network access rules. When trusted services are allowed, a trusted service instance can securely bypass the registry's network rules and perform operations such as pulling or pushing images. The service instance's managed identity is used for access; it must be assigned an Azure role and must authenticate with the registry.
+
+Use the Azure Cloud Shell or a local installation of the Azure CLI to run the command examples in this article. If you'd like to use it locally, version 2.18 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+Allowing registry access by trusted Azure services is a **preview** feature.
+
+## Limitations
+
+* You must use a system-assigned managed identity enabled in a [trusted service](#trusted-services) to access a network-restricted container registry. User-assigned managed identities aren't currently supported.
+* Allowing trusted services doesn't apply to a container registry configured with a [service endpoint](container-registry-vnet.md). The feature only affects registries that are restricted with a [private endpoint](container-registry-private-link.md) or that have [public IP access rules](container-registry-access-selected-networks.md) applied.
+
+## About trusted services
+
+Azure Container Registry has a layered security model, supporting multiple network configurations that restrict access to a registry, including:
+
+* [Private endpoint with Azure Private Link](container-registry-private-link.md). When configured, a registry's private endpoint is accessible only to resources within the virtual network, using private IP addresses.
+* [Registry firewall rules](container-registry-access-selected-networks.md), which allow access to the registry's public endpoint only from specific public IP addresses or address ranges. You can also configure the firewall to block all access to the public endpoint when using private endpoints.
+
+When deployed in a virtual network or configured with firewall rules, a registry denies access by default to users or services from outside those sources.
+
+Several multi-tenant Azure services operate from networks that can't be included in these registry network settings, preventing them from pulling or pushing images to the registry. By designating certain service instances as "trusted", a registry owner can allow select Azure resources to securely bypass the registry's network settings to pull or push images.
+
+### Trusted services
+
+Instances of the following services can access a network-restricted container registry if the registry's **allow trusted services** setting is enabled (the default). More services will be added over time.
+
+|Trusted service |Supported usage scenarios |
+|||
+|ACR Tasks | [Access a different registry from an ACR Task](container-registry-tasks-cross-registry-authentication.md) |
+|Machine Learning | [Deploy](../machine-learning/how-to-deploy-custom-docker-image.md) or [train](../machine-learning/how-to-train-with-custom-image.md) a model in a Machine Learning workspace using a custom Docker container image |
+|Azure Container Registry | [Import images from another Azure container registry](container-registry-import-images.md#import-from-an-azure-container-registry-in-the-same-ad-tenant) |
+
+> [!NOTE]
+> Currently, enabling the allow trusted services setting does not allow instances of other managed Azure services, including App Service, Azure Container Instances, and Azure Security Center, to access a network-restricted container registry.
+
+## Allow trusted services - CLI
+
+By default, the allow trusted services setting is enabled in a new Azure container registry. Disable or enable the setting by running the [az acr update](/cli/azure/acr#az-acr-update) command.
+
+To disable:
+
+```azurecli
+az acr update --name myregistry --allow-trusted-services false
+```
+
+To re-enable the setting in an existing registry where it's been disabled:
+
+```azurecli
+az acr update --name myregistry --allow-trusted-services true
+```
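+
+To check the current value on a registry, you can query the registry resource. Our reading of the resource model is that this setting surfaces as the `networkRuleBypassOptions` property (`AzureServices` when trusted services are allowed, `None` when they're not), so confirm against your own registry's output:
+
+```azurecli
+az acr show --name myregistry --query networkRuleBypassOptions --output tsv
+```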
+
+## Allow trusted services - portal
+
+By default, the allow trusted services setting is enabled in a new Azure container registry.
+
+To disable or re-enable the setting in the portal:
+
+1. In the portal, navigate to your container registry.
+1. Under **Settings**, select **Networking**.
+1. In **Allow public network access**, select **Selected networks** or **Disabled**.
+1. Do one of the following:
+ * To disable access by trusted services, under **Firewall exception**, uncheck **Allow trusted Microsoft services to access this container registry**.
+ * To allow trusted services, under **Firewall exception**, check **Allow trusted Microsoft services to access this container registry**.
+1. Select **Save**.
+
+## Trusted services workflow
+
+Here's a typical workflow to enable an instance of a trusted service to access a network-restricted container registry. A CLI sketch of the middle steps follows the list.
+
+1. Enable a system-assigned [managed identity for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) in an instance of one of the [trusted services](#trusted-services) for Azure Container Registry.
+1. Assign the identity an [Azure role](container-registry-roles.md) to your registry. For example, assign the AcrPull role to pull container images.
+1. In the network-restricted registry, configure the setting to allow access by trusted services.
+1. Use the identity's credentials to authenticate with the network-restricted registry.
+1. Pull images from the registry, or perform other operations allowed by the role.
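+
+As a sketch of steps 2 and 3, assuming the identity's principal ID is already at hand (step 1 and step 4 depend on the specific trusted service):
+
+```azurecli
+# Assign the service instance's system-assigned identity the AcrPull role
+# on the registry. The principal ID and registry name are placeholders.
+az role assignment create \
+  --assignee <principal-id-of-managed-identity> \
+  --role AcrPull \
+  --scope $(az acr show --name myregistry --query id --output tsv)
+
+# Confirm the registry allows access by trusted services (enabled by default).
+az acr update --name myregistry --allow-trusted-services true
+```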
+
+### Example: ACR Tasks
+
+The following example demonstrates using ACR Tasks as a trusted service. See [Cross-registry authentication in an ACR task using an Azure-managed identity](container-registry-tasks-cross-registry-authentication.md) for task details.
+
+1. Create or update an Azure container registry, and [push a sample base image](container-registry-tasks-cross-registry-authentication.md#prepare-base-registry) to the registry. This registry is the *base registry* for the scenario.
+1. In a second Azure container registry, [define](container-registry-tasks-cross-registry-authentication.md#define-task-steps-in-yaml-file) and [create](container-registry-tasks-cross-registry-authentication.md#option-2-create-task-with-system-assigned-identity) an ACR task to pull an image from the base registry. Enable a system-assigned managed identity when creating the task.
+1. Assign the task identity [an Azure role to access the base registry](container-registry-tasks-authentication-managed-identity.md#3-grant-the-identity-permissions-to-access-other-azure-resources). For example, assign the AcrPull role, which has permissions to pull images.
+1. [Add managed identity credentials](container-registry-tasks-authentication-managed-identity.md#4-optional-add-credentials-to-the-task) to the task.
+1. To confirm that the task bypasses network restrictions, [disable public access](container-registry-access-selected-networks.md#disable-public-network-access) in the base registry.
+1. Run the task. If the base registry and task are configured properly, the task runs successfully, because the base registry allows access.
+
+To test disabling access by trusted services (see the sketch after these steps):
+
+1. In the base registry, disable the setting to allow access by trusted services.
+1. Run the task again. In this case, the task run fails, because the base registry no longer allows access by the task.
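+
+A rough CLI sketch of both tests, with placeholder registry and task names:
+
+```azurecli
+# Positive test: block public access on the base registry; the task still
+# succeeds because trusted services are allowed by default.
+az acr update --name mybaseregistry --public-network-enabled false
+az acr task run --registry myregistry --name helloworldtask
+
+# Negative test: disallow trusted services on the base registry; the task
+# run now fails because the base registry no longer allows access.
+az acr update --name mybaseregistry --allow-trusted-services false
+az acr task run --registry myregistry --name helloworldtask
+```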
+
+## Next steps
+
+* To restrict access to a registry using a private endpoint in a virtual network, see [Configure Azure Private Link for an Azure container registry](container-registry-private-link.md).
+* To set up registry firewall rules, see [Configure public IP network rules](container-registry-access-selected-networks.md).
container-registry https://docs.microsoft.com/en-us/azure/container-registry/container-registry-import-images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-import-images.md
@@ -2,7 +2,7 @@
Title: Import container images description: Import container images to an Azure container registry by using Azure APIs, without needing to run Docker commands. Previously updated : 09/18/2020 Last updated : 01/15/2021 # Import container images to a container registry
@@ -31,6 +31,11 @@ To import container images, this article requires that you run the Azure CLI in
> If you need to distribute identical container images across multiple Azure regions, Azure Container Registry also supports [geo-replication](container-registry-geo-replication.md). By geo-replicating a registry (Premium service tier required), you can serve multiple regions with identical image and tag names from a single registry. >
+> [!IMPORTANT]
+> Changes to image import between two Azure container registries have been introduced as of January 2021:
+> * Import to or from a network-restricted Azure container registry requires the restricted registry to [**allow access by trusted services**](allow-access-trusted-services.md) to bypass the network. By default, the setting is enabled, allowing import. If the setting isn't enabled in a newly created registry with a private endpoint or with registry firewall rules, import will fail.
+> * In an existing network-restricted Azure container registry that is used as an import source or target, enabling this network security feature is optional but recommended.
+ ## Prerequisites If you don't already have an Azure container registry, create a registry. For steps, see [Quickstart: Create a private container registry using the Azure CLI](container-registry-get-started-azure-cli.md).
@@ -88,6 +93,8 @@ You can import an image from an Azure container registry in the same AD tenant u
* [Public access](container-registry-access-selected-networks.md#disable-public-network-access) to the source registry may be disabled. If public access is disabled, specify the source registry by resource ID instead of by registry login server name.
+* If the source registry and/or the target registry has a private endpoint or registry firewall rules are applied, ensure that the restricted registry [allows trusted services](allow-access-trusted-services.md) to access the network.
+ ### Import from a registry in the same subscription For example, import the `aci-helloworld:latest` image from a source registry *mysourceregistry* to *myregistry* in the same Azure subscription.
container-registry https://docs.microsoft.com/en-us/azure/container-registry/container-registry-troubleshoot-access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-troubleshoot-access.md
@@ -100,20 +100,20 @@ Related links:
### Configure service access
-Currently, Azure Security Center can't perform [image vulnerability scanning](../security-center/defender-for-container-registries-introduction.md?bc=%2fazure%2fcontainer-registry%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fcontainer-registry%2ftoc.json) in a registry that restricts access to private endpoints, selected subnets, or IP addresses. Also, resources of the following services are unable to access a container registry with network restrictions:
+Currently, access to a container registry with network restrictions isn't allowed from several Azure services:
-* Azure DevOps Services
-* Azure Container Instances
-* Azure Container Registry Tasks
+* Azure Security Center can't perform [image vulnerability scanning](../security-center/defender-for-container-registries-introduction.md?bc=%2fazure%2fcontainer-registry%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fcontainer-registry%2ftoc.json) in a registry that restricts access to private endpoints, selected subnets, or IP addresses.
+* Resources of certain Azure services are unable to access a container registry with network restrictions, including Azure App Service and Azure Container Instances.
If access or integration of these Azure services with your container registry is required, remove the network restriction. For example, remove the registry's private endpoints, or remove or modify the registry's public access rules.
+Starting January 2021, you can configure a network-restricted registry to [allow access](allow-access-trusted-services.md) from select trusted services.
+ Related links: * [Azure Container Registry image scanning by Security Center](../security-center/defender-for-container-registries-introduction.md) * Provide [feedback](https://feedback.azure.com/forums/347535-azure-security-center/suggestions/41091577-enable-vulnerability-scanning-for-images-that-are)
-* [Configure public IP network rules](container-registry-access-selected-networks.md)
-* [Connect privately to an Azure container registry using Azure Private Link](container-registry-private-link.md)
+* [Allow trusted services to securely access a network-restricted container registry](allow-access-trusted-services.md)
## Advanced troubleshooting
@@ -135,5 +135,5 @@ If you don't resolve your problem here, see the following options.
* [Troubleshoot registry login](container-registry-troubleshoot-login.md) * [Troubleshoot registry performance](container-registry-troubleshoot-performance.md) * [Community support](https://azure.microsoft.com/support/community/) options
-* [Microsoft Q&A](/answers/products/)
-* [Open a support ticket](https://azure.microsoft.com/support/create-ticket/)
+* [Microsoft Q&A](https://docs.microsoft.com/answers/products/)
+* [Open a support ticket](https://azure.microsoft.com/support/create-ticket/)
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/configure-periodic-backup-restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/configure-periodic-backup-restore.md
@@ -0,0 +1,146 @@
+
+ Title: Configure Azure Cosmos DB account with periodic backup
+description: This article describes how to configure Azure Cosmos DB accounts with periodic backup, including the backup interval and retention, and how to contact support to restore your data.
+ Last updated : 10/13/2020
+# Configure Azure Cosmos DB account with periodic backup
+
+Azure Cosmos DB automatically takes backups of your data at regular intervals. The automatic backups are taken without affecting the performance or availability of the database operations. All the backups are stored separately in a storage service, and those backups are globally replicated for resiliency against regional disasters. With Azure Cosmos DB, not only your data, but also the backups of your data are highly redundant and resilient to regional disasters. The following steps show how Azure Cosmos DB performs data backup:
+
+* Azure Cosmos DB automatically takes a full backup of your database every 4 hours and at any point of time, only the latest two backups are stored by default. If the default intervals aren't sufficient for your workloads, you can change the backup interval and the retention period from the Azure portal. You can change the backup configuration during or after the Azure Cosmos account is created. If the container or database is deleted, Azure Cosmos DB retains the existing snapshots of a given container or database for 30 days.
+
+* Azure Cosmos DB stores these backups in Azure Blob storage whereas the actual data resides locally within Azure Cosmos DB.
+
+* To guarantee low latency, the snapshot of your backup is stored in Azure Blob storage in the same region as the current write region (or **one** of the write regions, in case you have a multi-region write configuration). For resiliency against regional disaster, each snapshot of the backup data in Azure Blob storage is again replicated to another region through geo-redundant storage (GRS). The region to which the backup is replicated is based on your source region and the regional pair associated with the source region. To learn more, see the [list of geo-redundant pairs of Azure regions](../best-practices-availability-paired-regions.md) article. You cannot access this backup directly. The Azure Cosmos DB team will restore your backup when you request it through a support request.
+
+ The following image shows how an Azure Cosmos container with all the three primary physical partitions in West US is backed up in a remote Azure Blob Storage account in West US and then replicated to East US:
+
+ :::image type="content" source="./media/configure-periodic-backup-restore/automatic-backup.png" alt-text="Periodic full backups of all Cosmos DB entities in GRS Azure Storage." lightbox="./media/configure-periodic-backup-restore/automatic-backup.png" border="false":::
+
+* The backups are taken without affecting the performance or availability of your application. Azure Cosmos DB performs data backup in the background without consuming any extra provisioned throughput (RUs) or affecting the performance and availability of your database.
+
+## <a id="configure-backup-interval-retention"></a>Modify the backup interval and retention period
+
+Azure Cosmos DB automatically takes a full backup of your data every 4 hours, and at any point of time, the latest two backups are stored. This configuration is the default option and it's offered without any extra cost. You can change the default backup interval and retention period during the Azure Cosmos account creation or after the account is created. The backup configuration is set at the Azure Cosmos account level and you need to configure it on each account. After you configure the backup options for an account, they're applied to all the containers within that account. Currently, you can change the backup options from the Azure portal only.
+
+If you have accidentally deleted or corrupted your data, **before you create a support request to restore the data, make sure to increase the backup retention for your account to at least seven days. It's best to increase your retention within 8 hours of this event.** This way, the Azure Cosmos DB team has enough time to restore your account.
+
+Use the following steps to change the default backup options for an existing Azure Cosmos account:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Navigate to your Azure Cosmos account and open the **Backup & Restore** pane. Update the backup interval and the backup retention period as required.
+
+ * **Backup Interval** - It's the interval at which Azure Cosmos DB attempts to take a backup of your data. A backup takes a non-zero amount of time, and in some cases it could potentially fail due to downstream dependencies. Azure Cosmos DB tries its best to take a backup at the configured interval; however, it doesn't guarantee that the backup completes within that time interval. You can configure this value in hours or minutes. The backup interval can't be less than 1 hour or greater than 24 hours. When you change this interval, the new interval takes effect starting from the time when the last backup was taken.
+
+ * **Backup Retention** - It represents the period for which each backup is retained. You can configure it in hours or days. The minimum retention period can't be less than two times the backup interval (in hours), and it can't be greater than 720 hours. For example, with the default 4-hour interval, the minimum retention is 8 hours.
+
+ * **Copies of data retained** - By default, two backup copies of your data are offered free of charge. There is an extra charge if you need more than two copies. See the Consumed Storage section in the [Pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) to know the exact price for extra copies.
+
+ :::image type="content" source="./media/configure-periodic-backup-restore/configure-backup-interval-retention.png" alt-text="Configure backup interval and retention for an existing Azure Cosmos account." border="true":::
+
+If you configure backup options during the account creation, you can configure the **Backup policy**, which is either **Periodic** or **Continuous**. The periodic policy allows you to configure the Backup interval and Backup retention. The continuous policy is currently available by sign-up only. The Azure Cosmos DB team will assess your workload and approve your request.
++
+## <a id="request-restore"></a>Request data restore from a backup
+
+If you accidentally delete your database or a container, you can [file a support ticket](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) or [call Azure support](https://azure.microsoft.com/support/options/) to restore the data from automatic online backups. Azure support is available only for selected plans, such as **Standard**, **Developer**, and plans higher than those; it is not available with the **Basic** plan. To learn about different support plans, see the [Azure support plans](https://azure.microsoft.com/support/plans/) page.
+
+To restore a specific snapshot of the backup, Azure Cosmos DB requires that the data is available during the backup cycle for that snapshot.
+You should have the following details before requesting a restore:
+
+* Have your subscription ID ready.
+
+* Based on how your data was accidentally deleted or modified, be prepared to provide additional information. Having this information available ahead of time minimizes the back-and-forth, which can be detrimental in time-sensitive cases.
+
+* If the entire Azure Cosmos DB account is deleted, you need to provide the name of the deleted account. If you create another account with the same name as the deleted account, share that with the support team because it helps to determine the right account to choose. It's recommended to file a separate support ticket for each deleted account, because it minimizes confusion about the state of the restore.
+
+* If one or more databases are deleted, you should provide the Azure Cosmos account name and the database names, and specify whether a new database with the same name exists.
+
+* If one or more containers are deleted, you should provide the Azure Cosmos account name, the database names, and the container names, and specify whether a container with the same name exists.
+
+* If you have accidentally deleted or corrupted your data, you should contact [Azure support](https://azure.microsoft.com/support/options/) within 8 hours so that the Azure Cosmos DB team can help you restore the data from the backups. **Before you create a support request to restore the data, make sure to [increase the backup retention](#configure-backup-interval-retention) for your account to at least seven days. It's best to increase your retention within 8 hours of this event.** This way the Azure Cosmos DB support team will have enough time to restore your account.
+
+In addition to the Azure Cosmos account name, database names, and container names, you should specify the point in time to which the data should be restored, for example `2021-01-15T22:00:00Z`. It is important to be as precise as possible to help us determine the best available backups at that time. **It is also important to specify the time in UTC.**
+
+The following screenshot illustrates how to create a support request for a container (collection/graph/table) to restore data by using the Azure portal. Provide other details, such as the type of data, the purpose of the restore, and the time when the data was deleted, to help us prioritize the request.
++
+## Considerations for restoring the data from a backup
+
+You may accidentally delete or modify your data in one of the following scenarios:
+
+* Delete the entire Azure Cosmos account.
+
+* Delete one or more Azure Cosmos databases.
+
+* Delete one or more Azure Cosmos containers.
+
+* Delete or modify the Azure Cosmos items (for example, documents) within a container. This specific case is typically referred to as data corruption.
+
+* A shared offer database or containers within a shared offer database are deleted or corrupted.
+
+Azure Cosmos DB can restore data in all the above scenarios. When restoring, a new Azure Cosmos account is created to hold the restored data. The name of the new account, if it's not specified, will have the format `<Azure_Cosmos_account_original_name>-restored1`. The last digit is incremented when multiple restores are attempted. You can't restore data to a pre-created Azure Cosmos account.
+
+When you accidentally delete an Azure Cosmos account, we can restore the data into a new account with the same name, provided the account name is not in use. So, we recommend that you don't re-create the account after deleting it, because doing so not only prevents the restored data from using the same name, but also makes it harder to discover the right account to restore from.
+
+When you accidentally delete an Azure Cosmos database, we can restore the whole database or a subset of the containers within that database. It is also possible to select specific containers across databases and restore them to a new Azure Cosmos account.
+
+When you accidentally delete or modify one or more items within a container (the data corruption case), you need to specify the time to restore to. Time is important if there is data corruption: because the container is live, the backup is still running, so if you wait beyond the retention period (the default is eight hours) the backups would be overwritten. In order to prevent the backup from being overwritten, increase the backup retention for your account to at least seven days. It's best to increase your retention within 8 hours of the data corruption.
+
+If you have accidentally deleted or corrupted your data, you should contact [Azure support](https://azure.microsoft.com/support/options/) within 8 hours so that the Azure Cosmos DB team can help you restore the data from the backups. This way the Azure Cosmos DB support team will have enough time to restore your account.
+
+> [!NOTE]
+> After you restore the data, not all the source capabilities or settings are carried over to the restored account. The following settings are not carried over to the new account:
+> * VNET access control lists
+> * Stored procedures, triggers and user-defined functions
+> * Multi-region settings
+
+If you provision throughput at the database level, the backup and restore process happens at the entire database level, not at the individual container level. In such cases, you can't select a subset of containers to restore.
+
+## Required permissions to change retention or restore from the portal
+Principals who are part of the role [CosmosdbBackupOperator](../role-based-access-control/built-in-roles.md#cosmosbackupoperator), owner, or contributor are allowed to request a restore or change the retention period.
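+
+For example, an owner could grant the built-in backup operator role with a role assignment along these lines. The account, resource group, and user names are placeholders, and the exact role name should be verified against the built-in roles list:
+
+```azurecli
+az role assignment create \
+  --assignee "user@contoso.com" \
+  --role "CosmosBackupOperator" \
+  --scope $(az cosmosdb show --name myaccount --resource-group myrg --query id --output tsv)
+```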
+
+## Understanding costs of extra backups
+Two backups are provided free, and extra backups are charged according to the region-based pricing for backup storage described in [backup storage pricing](https://azure.microsoft.com/en-us/pricing/details/cosmos-db/). For example, if the backup retention is configured to 240 hours (that is, 10 days) and the backup interval to 24 hours, this implies 10 copies of the backup data (240 ÷ 24 = 10). Assuming 1 TB of data in West US 2, the cost would be 1000 * 0.12 ~ $120 for backup storage in a given month.
++
+## Options to manage your own backups
+
+With Azure Cosmos DB SQL API accounts, you can also maintain your own backups by using one of the following approaches:
+
+* Use [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md) to move data periodically to a storage of your choice.
+
+* Use Azure Cosmos DB [change feed](change-feed.md) to read data periodically for full backups or for incremental changes, and store it in your own storage.
+
+## Post-restore actions
+
+The primary goal of the data restore is to recover the data that you have accidentally deleted or modified. So, we recommend that you first inspect the content of the recovered data to ensure it contains what you are expecting. If everything looks good, you can migrate the data back to the primary account. Although it is possible to use the restored account as your new active account, it's not a recommended option if you have production workloads.
+
+After you restore the data, you get a notification about the name of the new account (it's typically in the format `<original-name>-restored1`) and the time the account was restored to. The restored account has the same provisioned throughput and indexing policies, and it is in the same region as the original account. A user who is the subscription admin or a coadmin can see the restored account.
+
+### Migrate data to the original account
+
+The following are different ways to migrate data back to the original account:
+
+* Use the [Azure Cosmos DB data migration tool](import-data.md).
+* Use the [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md).
+* Use the [change feed](change-feed.md) in Azure Cosmos DB.
+* You can write your own custom code.
+
+It is advised that you delete the container or database immediately after migrating the data. If you don't delete the restored databases or containers, they will incur costs for request units, storage, and egress.
+
+## Next steps
+
+* To make a restore request, contact Azure Support, [file a ticket from the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
+* Configure and manage continuous backup using [Azure portal](continuous-backup-restore-portal.md), [PowerShell](continuous-backup-restore-powershell.md), [CLI](continuous-backup-restore-command-line.md), or [Azure Resource Manager](continuous-backup-restore-template.md).
+* [Manage permissions](continuous-backup-restore-permissions.md) required to restore data with continuous backup mode.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/continuous-backup-restore-command-line https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/continuous-backup-restore-command-line.md
@@ -0,0 +1,299 @@
+
+ Title: Use Azure CLI to configure continuous backup and point in time restore in Azure Cosmos DB.
+description: Learn how to provision an account with continuous backup and restore data using Azure CLI.
+ Last updated : 02/01/2021
+# Configure and manage continuous backup and point in time restore (Preview) - using Azure CLI
+
+> [!IMPORTANT]
+> The point-in-time restore feature (continuous backup mode) for Azure Cosmos DB is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Azure Cosmos DB's point-in-time restore feature (Preview) helps you recover from an accidental change within a container, restore a deleted account, database, or container, or restore into any region where backups existed. The continuous backup mode allows you to restore to any point of time within the last 30 days.
+
+This article describes how to provision an account with continuous backup and restore data using Azure CLI.
+
+## <a id="install"></a>Install Azure CLI
+
+1. Install the latest version of Azure CLI
+
+ * Install the latest version of the [Azure CLI](/cli/azure/install-azure-cli) (version 2.17.1 or higher).
+ * If you have already installed the CLI, run the `az upgrade` command to update to the latest version. This command works only with CLI version 2.11 or higher. If you have an earlier version, use the above link to install the latest version.
+
+1. Install the `cosmosdb-preview` CLI extension.
+
+ * The point-in-time restore commands are available under `cosmosdb-preview` extension.
+ * You can install this extension by running the following command:
+ `az extension add --name cosmosdb-preview`
+ * You can uninstall this extension by running the following command:
+ `az extension remove --name cosmosdb-preview`
+
+1. Sign in and select your subscription
+
+ * Sign in to your Azure account with the `az login` command.
+ * Select the required subscription using the `az account set -s <subscriptionguid>` command.
+
+## <a id="provision-sql-api"></a>Provision a SQL API account with continuous backup
+
+To provision a SQL API account with continuous backup, pass the extra argument `--backup-policy-type Continuous` along with the regular provisioning command. The following command is an example of a single-region write account named `pitracct2` with continuous backup policy, created in the "West US" region under the "myrg" resource group:
+
+```azurecli-interactive
+
+az cosmosdb create \
+ --name pitracct2 \
+ --resource-group myrg \
+ --backup-policy-type Continuous \
+ --default-consistency-level Session \
+ --locations regionName="West US"
+
+```
+
+## <a id="provision-mongo-api"></a>Provision an Azure Cosmos DB API for MongoDB account with continuous backup
+
+The following command shows an example of a single-region write account named `pitracct3` with continuous backup policy, created in the "West US" region under the "myrg" resource group:
+
+```azurecli-interactive
+
+az cosmosdb create \
+ --name pitracct3 \
+ --kind MongoDB \
+ --resource-group myrg \
+ --server-version "3.6" \
+ --backup-policy-type Continuous \
+ --default-consistency-level Session \
+ --locations regionName="West US"
+
+```
+
+## <a id="trigger-restore"></a>Trigger a restore operation with CLI
+
+The simplest way to trigger a restore is by issuing the restore command with the name of the target account, the source account, location, resource group, timestamp (in UTC), and optionally the database and container names. The following are some examples to trigger the restore operation:
+
+1. Create a new Azure Cosmos DB account by restoring from an existing account.
+
+ ```azurecli-interactive
+
+ az cosmosdb restore \
+ --target-database-account-name MyRestoredCosmosDBDatabaseAccount \
+ --account-name MySourceAccount \
+ --restore-timestamp 2020-07-13T16:03:41+0000 \
+ --resource-group MyResourceGroup \
+ --location "West US"
+
+ ```
+
+2. Create a new Azure Cosmos DB account by restoring only selected databases and containers from an existing database account.
+
+ ```azurecli-interactive
+
+ az cosmosdb restore \
+ --resource-group MyResourceGroup \
+ --target-database-account-name MyRestoredCosmosDBDatabaseAccount \
+ --account-name MySourceAccount \
+ --restore-timestamp 2020-07-13T16:03:41+0000 \
+ --location "West US" \
+ --databases-to-restore name=MyDB1 collections=collection1 collection2 \
+ --databases-to-restore name=MyDB2 collections=collection3 collection4
+
+ ```
+
+## <a id="enumerate-sql-api"></a>Enumerate restorable resources for SQL API
+
+The enumeration commands described below help you discover the resources that are available for restore at various timestamps. They also provide a feed of key events on the restorable account, database, and container resources.
+
+**List all the accounts that can be restored in the current subscription**
+
+Run the following CLI command to list the accounts that can be restored in the current subscription (here filtered to the account named `pitrbb`):
+
+```azurecli-interactive
+az cosmosdb restorable-database-account list --account-name "pitrbb"
+```
+
+The response includes all the database accounts (both live and deleted) that can be restored and the regions that they can be restored from:
+
+```json
+{
+ "accountName": "pitrbb",
+ "apiType": "Sql",
+ "creationTime": "2021-01-08T23:34:11.095870+00:00",
+ "deletionTime": null,
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDB/locations/West US/restorableDatabaseAccounts/7133a59a-d1c0-4645-a699-6e296d6ac865",
+ "identity": null,
+ "location": "West US",
+ "name": "7133a59a-d1c0-4645-a699-6e296d6ac865",
+ "restorableLocations": [
+ {
+ "creationTime": "2021-01-08T23:34:11.095870+00:00",
+ "deletionTime": null,
+ "locationName": "West US",
+ "regionalDatabaseAccountInstanceId": "f02df26b-c0ec-4829-8bef-3482d36e6230"
+ }
+ ],
+ "tags": null,
+ "type": "Microsoft.DocumentDB/locations/restorableDatabaseAccounts"
+ }
+```
+
+Just like the "CreationTime" or "DeletionTime" for the account, there is a "CreationTime" or "DeletionTime" for the region too. These times allow you to choose the right region and a valid time range to restore into that region.
+
+**List all the versions of databases in a live database account**
+
+Listing all the versions of databases allows you to choose the right database in a scenario where the actual time of existence of the database is unknown.
+
+Run the following CLI command to list all the versions of databases. This command only works with live accounts. The "instanceId" and the "location" parameters are obtained from the "name" and "location" properties in the response of the `az cosmosdb restorable-database-account list` command. The instanceId attribute is also a property of the source database account that is being restored:
+
+```azurecli-interactive
+az cosmosdb sql restorable-database list \
+ --instance-id "7133a59a-d1c0-4645-a699-6e296d6ac865" \
+ --location "West US"
+```
+
+This command output now shows when a database was created and deleted.
+
+```json
+[
+ {
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDB/locations/West US/restorableDatabaseAccounts/7133a59a-d1c0-4645-a699-6e296d6ac865/restorableSqlDatabases/40e93dbd-2abe-4356-a31a-35567b777220",
+ ..
+ "name": "40e93dbd-2abe-4356-a31a-35567b777220",
+ "resource": {
+ "database": {
+ "id": "db1"
+ },
+ "eventTimestamp": "2021-01-08T23:27:25Z",
+ "operationType": "Create",
+ "ownerId": "db1",
+ "ownerResourceId": "YuZAAA=="
+ },
+ ..
+ },
+ {
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDB/locations/West US/restorableDatabaseAccounts/7133a59a-d1c0-4645-a699-6e296d6ac865/restorableSqlDatabases/243c38cb-5c41-4931-8cfb-5948881a40ea",
+ ..
+ "name": "243c38cb-5c41-4931-8cfb-5948881a40ea",
+ "resource": {
+ "database": {
+ "id": "spdb1"
+ },
+ "eventTimestamp": "2021-01-08T23:25:25Z",
+ "operationType": "Create",
+ "ownerId": "spdb1",
+ "ownerResourceId": "OIQ1AA=="
+ },
+ ..
+ }
+]
+```
+
+**List all the versions of SQL containers of a database in a live database account**
+
+Use the following command to list all the versions of SQL containers. This command only works with live accounts. The "databaseRid" parameter is the "ResourceId" of the database you want to restore. It is the value of the "ownerResourceId" attribute found in the response of the `az cosmosdb sql restorable-database list` command.
+
+```azurecli-interactive
+az cosmosdb sql restorable-container list \
+ --instance-id "7133a59a-d1c0-4645-a699-6e296d6ac865" \
+ --database-rid "OIQ1AA==" \
+ --location "West US"
+```
+
+This command output includes a list of the operations performed on all the containers inside this database:
+
+```json
+[
+ {
+ ...
+
+ "eventTimestamp": "2021-01-08T23:25:29Z",
+ "operationType": "Replace",
+ "ownerId": "procol3",
+ "ownerResourceId": "OIQ1APZ7U18="
+...
+ },
+ {
+ ...
+ "eventTimestamp": "2021-01-08T23:25:26Z",
+ "operationType": "Create",
+ "ownerId": "procol3",
+ "ownerResourceId": "OIQ1APZ7U18="
+ },
+]
+```
+
+**Find databases or containers that can be restored at any given timestamp**
+
+Use the following command to get the list of databases or containers that can be restored at any given timestamp. This command only works with live accounts.
+
+```azurecli-interactive
+
+az cosmosdb sql restorable-resource list \
+ --instance-id "7133a59a-d1c0-4645-a699-6e296d6ac865" \
+ --location "West US" \
+ --restore-location "West US" \
+ --restore-timestamp "2021-01-10 T01:00:00+0000"
+
+```
+
+```json
+[
+ {
+ "collectionNames": [
+ "procol1",
+ "procol2"
+ ],
+ "databaseName": "db1"
+ },
+ {
+ "collectionNames": [
+ "procol3",
+ "spcol1"
+ ],
+ "databaseName": "spdb1"
+ }
+]
+```
+
+## <a id="enumerate-mongodb-api"></a>Enumerate restorable resources for MongoDB API account
+
+The enumeration commands described below help you discover the resources that are available for restore at various timestamps. They also provide a feed of key events on the restorable account, database, and container resources. As with the SQL API, you can use the `az cosmosdb` command, but with `mongodb` as the parameter instead of `sql`. These commands only work for live accounts.
+
+**List all the versions of mongodb databases in a live database account**
+
+```azurecli-interactive
+az cosmosdb mongodb restorable-database list \
+ --instance-id "7133a59a-d1c0-4645-a699-6e296d6ac865" \
+ --location "West US"
+```
+
+**List all the versions of mongodb collections of a database in a live database account**
+
+```azurecli-interactive
+az cosmosdb mongodb restorable-collection list \
+ --instance-id "<InstanceID>" \
+ --database-rid "AoQ13r==" \
+ --location "West US"
+```
+
+**List all the resources of a mongodb database account that are available to restore at a given timestamp and region**
+
+```azurecli-interactive
+az cosmosdb mongodb restorable-resource list \
+ --instance-id "7133a59a-d1c0-4645-a699-6e296d6ac865" \
+ --location "West US" \
+ --restore-location "West US" \
+ --restore-timestamp "2020-07-20T16:09:53+0000"
+```
+
+## Next steps
+
+* Configure and manage continuous backup using [Azure portal](continuous-backup-restore-portal.md), [PowerShell](continuous-backup-restore-powershell.md), or [Azure Resource Manager](continuous-backup-restore-template.md).
+* [Resource model of continuous backup mode](continuous-backup-restore-resource-model.md)
+* [Manage permissions](continuous-backup-restore-permissions.md) required to restore data with continuous backup mode.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/continuous-backup-restore-frequently-asked-questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/continuous-backup-restore-frequently-asked-questions.md
@@ -0,0 +1,83 @@
+
+ Title: Frequently asked questions about Azure Cosmos DB point-in-time restore feature.
+description: This article lists frequently asked questions about the Azure Cosmos DB point-in-time restore feature that is achieved by using the continuous backup mode.
+ Last updated : 02/01/2021
+# Frequently asked questions on the Azure Cosmos DB point-in-time restore feature (Preview)
+
+> [!IMPORTANT]
+> The point-in-time restore feature (continuous backup mode) for Azure Cosmos DB is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+This article lists frequently asked questions about the Azure Cosmos DB point-in-time restore functionality (Preview) that is achieved by using the continuous backup mode.
+
+### How much time does it take to restore?
+The restore duration depends on the size of your data.
+
+### Can I submit the restore time in local time?
+No. The restore timestamp must be specified in UTC. Also, the restore only works if the key resources, like the databases or containers, existed at that timestamp. You can verify this by entering the time and checking whether the selected database or container is listed for that time. If no resources exist to restore, the restore process doesn't work.
+
+### How can I track if an account is being restored?
+After you submit the restore command, wait on the same page; after the operation completes, the status bar shows a message that the account was restored successfully. You can also search for the restored account and [track the status of the account being restored](continuous-backup-restore-portal.md#track-restore-status). While the restore is in progress, the status of the account is "Creating"; after the restore operation completes, the account status changes to "Online".
+
+Similarly, for PowerShell and the CLI, you can track the progress of the restore operation by running the `az cosmosdb show` command as follows:
+
+```azurecli-interactive
+az cosmosdb show --name "accountName" --resource-group "resourceGroup"
+```
+
+The `provisioningState` shows "Succeeded" when the account is online.
+
+```json
+{
+  "virtualNetworkRules": [],
+  "writeLocations": [
+    {
+      "documentEndpoint": "https://<accountname>.documents.azure.com:443/",
+      "failoverPriority": 0,
+      "id": "accountName",
+      "isZoneRedundant": false,
+      "locationName": "West US 2",
+      "provisioningState": "Succeeded"
+    }
+  ]
+}
+```
+
+### How can I find out whether an account was restored from another account?
+Run the `az cosmosdb show` command and, in the output, check the value of the `createMode` property. If the value is set to **Restore**, it indicates that the account was restored from another account. The `restoreParameters` property has further details, such as `restoreSource`, which contains the source account ID. The last GUID in the `restoreSource` parameter is the instance ID of the source account.
+
+For example, in the following output, the source account's instance ID is "7b4bb-f6a0-430e-ade1-638d781830cc".
+
+```json
+"restoreParameters": {
+ "databasesToRestore" : [],
+ "restoreMode": "PointInTime",
+ "restoreSource": "/subscriptions/2a5b-f6a0-430e-ade1-638d781830dd/providers/Microsoft.DocumentDB/locations/westus/restorableDatabaseAccounts/7b4bb-f6a0-430e-ade1-638d781830cc",
+ "restoreTimestampInUtc": "2020-06-11T22:05:09Z"
+}
+```
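+
+As a quick check, the following sketch queries just these properties with a JMESPath expression; the account and resource group names are placeholders, and `restoreParameters` only appears on accounts that were created by a restore:
+
+```azurecli-interactive
+az cosmosdb show \
+    --name "accountName" \
+    --resource-group "resourceGroup" \
+    --query "{createMode:createMode, restoreSource:restoreParameters.restoreSource}"
+```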
+
+### What happens when I restore a shared throughput database or a container within a shared throughput database?
+The entire shared throughput database is restored. You cannot choose a subset of containers in a shared throughput database for restore.
+
+### What is the use of InstanceID in the account definition?
+At any given point in time, an Azure Cosmos DB account's `accountName` property is globally unique while the account is alive. However, after the account is deleted, it's possible to create another account with the same name, so the `accountName` alone is no longer enough to identify a specific instance of an account.
+
+The ID, or `instanceId`, is a property of a specific instance of an account; it's used to disambiguate between multiple accounts (live and deleted) that have the same name when restoring. You can get the instance ID by running the `Get-AzCosmosDBRestorableDatabaseAccount` or `az cosmosdb restorable-database-account` commands. The value of the `name` attribute denotes the instance ID.
+
+## Next steps
+
+* What is [continuous backup](continuous-backup-restore-introduction.md) mode?
+* Configure and manage continuous backup using [Azure portal](continuous-backup-restore-portal.md), [PowerShell](continuous-backup-restore-powershell.md), [CLI](continuous-backup-restore-command-line.md), or [Azure Resource Manager](continuous-backup-restore-template.md).
+* [Manage permissions](continuous-backup-restore-permissions.md) required to restore data with continuous backup mode.
+* [Resource model of continuous backup mode](continuous-backup-restore-resource-model.md)
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/continuous-backup-restore-introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/continuous-backup-restore-introduction.md
@@ -0,0 +1,137 @@
+
+ Title: Continuous backup with point in time restore feature in Azure Cosmos DB
+description: Azure Cosmos DB's point-in-time restore feature helps to recover data from an accidental write, delete operation, or to restore data into any region. Learn about pricing and current limitations.
+++ Last updated : 02/01/2021++++++
+# Continuous backup with point-in-time restore (Preview) feature in Azure Cosmos DB
+
+> [!IMPORTANT]
+> The point-in-time restore feature (continuous backup mode) for Azure Cosmos DB is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Azure Cosmos DB's point-in-time restore feature (Preview) helps in multiple scenarios, such as the following:
+
+* To recover from an accidental write or delete operation within a container.
+* To restore a deleted account, database, or a container.
+* To restore into any region (where backups existed) at the restore point in time.
+
+Azure Cosmos DB performs data backup in the background without consuming any extra provisioned throughput (RUs) or affecting the performance and availability of your database. Continuous backups are taken in every region where the account exists. The following image shows how a container with its write region in West US and read regions in East US and East US 2 is backed up to a remote Azure Blob Storage account in the respective regions. By default, each region stores the backup in locally redundant storage accounts. If the region has [Availability zones](high-availability.md#availability-zone-support) enabled, the backup is stored in zone-redundant storage accounts.
+
+The available time window for restore (also known as the retention period) is the lower of the following two values: 30 days back in the past from now, or up to the resource creation time. The point in time for restore can be any timestamp within the retention period.
+
+In the public preview, you can restore the contents of an Azure Cosmos DB account for SQL API or MongoDB to a point in time in another account by using the [Azure portal](continuous-backup-restore-portal.md), the [Azure Command Line Interface](continuous-backup-restore-command-line.md) (az CLI), [Azure PowerShell](continuous-backup-restore-powershell.md), or [Azure Resource Manager](continuous-backup-restore-template.md).
+
+## What is restored?
+
+In a steady state, all mutations performed on the source account (which includes databases, containers, and items) are backed up asynchronously within 100 seconds. If the backup media (that is, Azure storage) is down or unavailable, the mutations are persisted locally until the media is available again and are then flushed out, to prevent any loss in the fidelity of the operations that can be restored.
+
+You can choose to restore any combination of provisioned throughput containers, shared throughput databases, or the entire account. The restore action restores all data and its index properties into a new account. The restore process ensures that all the data restored into an account, database, or container is consistent up to the specified restore time. The duration of the restore depends on the amount of data that needs to be restored.
+
+> [!NOTE]
+> With the continuous backup mode, the backups are taken in every region where your Azure Cosmos DB account is available. The backups taken in each region are locally redundant by default, and zone redundant if your account has the [availability zone](high-availability.md#availability-zone-support) feature enabled for that region. The restore action always restores data into a new account.
+
+## What is not restored?
+
+The following configurations aren't restored after the point-in-time recovery:
+
+* Firewall, VNET, private endpoint settings.
+* Consistency settings. By default, the account is restored with session consistency.
+* Regions.
+* Stored procedures, triggers, UDFs.
+
+You can add these configurations to the restored account after the restore is completed.
+
+## Restore scenarios
+
+The following are some of the key scenarios that are addressed by the point-in-time restore feature. Scenarios [a] through [c] demonstrate how to trigger a restore if the restore timestamp is known beforehand.
+However, there could be scenarios where you don't know the exact time of accidental deletion or corruption. Scenarios [d] and [e] demonstrate how to _discover_ the restore timestamp using the new event feed APIs on the restorable database or containers.
+
+a. **Restore deleted account** - All the deleted accounts that you can restore are visible from the **Restore** pane. For example, suppose "Account A" is deleted at timestamp T3. In this case, a timestamp just before T3, the location, the resource group, and the target account name are sufficient to restore from the [Azure portal](continuous-backup-restore-portal.md#restore-deleted-account), [PowerShell](continuous-backup-restore-powershell.md#trigger-restore), or [CLI](continuous-backup-restore-command-line.md#trigger-restore).
+
+b. **Restore data of an account in a particular region** - For example, suppose "Account A" exists in two regions, "East US" and "West US", at timestamp T3. If you need a copy of account A in "West US", you can do a point-in-time restore from the [Azure portal](continuous-backup-restore-portal.md), [PowerShell](continuous-backup-restore-powershell.md#trigger-restore), or [CLI](continuous-backup-restore-command-line.md#trigger-restore) with West US as the target location.
+
+c. **Recover from an accidental write or delete operation within a container with a known restore timestamp** - For example, suppose you **know** that the contents of "Container 1" within "Database 1" were accidentally modified at timestamp T3. You can do a point-in-time restore from the [Azure portal](continuous-backup-restore-portal.md#restore-live-account), [PowerShell](continuous-backup-restore-powershell.md#trigger-restore), or [CLI](continuous-backup-restore-command-line.md#trigger-restore) into another account at timestamp T3 to recover the desired state of the container (see the CLI sketch after this list).
+
+d. **Restore an account to a previous point in time before the accidental delete of the database** - In the [Azure portal](continuous-backup-restore-portal.md#restore-live-account), you can use the event feed pane to determine when a database was deleted and find the restore time. Similarly, with [Azure CLI](continuous-backup-restore-command-line.md#trigger-restore) and [PowerShell](continuous-backup-restore-powershell.md#trigger-restore), you can discover the database deletion event by enumerating the database events feed and then trigger the restore command with the required parameters.
+
+e. **Restore an account to a previous point in time before the accidental delete or modification of the container properties.** - In [Azure portal](continuous-backup-restore-portal.md#restore-live-account), you can use the event feed pane to determine when a container was created, modified, or deleted to find the restore time. Similarly, with [Azure CLI](continuous-backup-restore-command-line.md#trigger-restore) and [PowerShell](continuous-backup-restore-powershell.md#trigger-restore), you can discover all the container events by enumerating the container events feed and then trigger the restore command with required parameters.
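+
+As a concrete sketch of scenario [c], the following Azure CLI call (available through the preview Cosmos DB CLI extension; all names and the timestamp are placeholders) restores a single container into a new account:
+
+```azurecli-interactive
+az cosmosdb restore \
+    --resource-group "myrg" \
+    --target-database-account-name "restored-account" \
+    --account-name "source-account" \
+    --restore-timestamp "2021-01-10T01:00:00+0000" \
+    --location "West US" \
+    --databases-to-restore name="Database 1" collections="Container 1"
+```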
+
+## Permissions
+
+Azure Cosmos DB allows you to isolate and restrict the restore permissions for a continuous backup account to a specific role or principal. The owner of the account can trigger a restore and assign a role to other principals to perform the restore operation. To learn more, see the [Permissions](continuous-backup-restore-permissions.md) article.
+
+## <a id="continuous-backup-pricing"></a>Pricing
+
+Azure Cosmos DB accounts that have continuous backup enabled will incur an additional monthly charge to "store the backup" and to "restore your data". The restore cost is added every time the restore operation is initiated. If you configure an account with continuous backup but don't restore the data, only the backup storage cost is included in your bill.
+
+The following example is based on the price for an Azure Cosmos account deployed in a non-government region in the US. The pricing and calculation can vary depending on the region you're using; see the [Azure Cosmos DB pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for the latest pricing information.
+
+* All accounts enabled with continuous backup policy incur an additional monthly charge for backup storage that is calculated as follows:
+
+ $0.20/GB * Data size in GB in account * Number of regions
+
+* Every restore API invocation incurs a one-time charge. The charge is a function of the amount of data restored and is calculated as follows:
+
+ $0.15/GB * Data size in GB.
+
+For example, if you have 1 TB of data in two regions, then:
+
+* Backup storage cost is calculated as (1000 * 0.20 * 2) = $400 per month
+
+* Restore cost is calculated as (1000 * 0.15) = $150 per restore
+
+## Current limitations (public preview)
+
+Currently, the point-in-time restore functionality is in public preview and has the following limitations:
+
+* Only Azure Cosmos DB APIs for SQL and MongoDB are supported for continuous backup. The Cassandra, Table, and Gremlin APIs are not yet supported.
+
+* An existing account with the default periodic backup policy cannot be converted to use continuous backup mode.
+
+* Azure sovereign and Azure Government cloud regions are not yet supported.
+
+* Accounts with customer-managed keys are not supported for continuous backup.
+
+* Multi-region write accounts are not supported.
+
+* Accounts with Synapse Link enabled are not supported.
+
+* The restored account is created in the same region where your source account exists. You can't restore an account into a region where the source account did not exist.
+
+* The restore window is only 30 days and cannot be changed.
+
+* The backups are not automatically geo-disaster resistant. You have to explicitly add another region to have resiliency for the account and the backup.
+
+* While a restore is in progress, don't modify or delete the Identity and Access Management (IAM) policies that grant the permissions for the account, and don't change any VNET or firewall configuration.
+
+* Azure Cosmos DB API for SQL or MongoDB accounts that create a unique index after the container is created are not supported for continuous backup. Only containers that create a unique index as part of the initial container creation are supported. For MongoDB accounts, you create a unique index by using [extension commands](mongodb-custom-commands.md).
+
+* The point-in-time restore functionality always restores to a new Azure Cosmos account. Restoring to an existing account is currently not supported. If you are interested in providing feedback about in-place restore, contact the Azure Cosmos DB team via your account representative or [UserVoice](https://feedback.azure.com/forums/263030-azure-cosmos-db).
+
+* All the new APIs exposed for listing `RestorableDatabaseAccount`, `RestorableSqlDatabases`, `RestorableSqlContainer`, `RestorableMongodbDatabase`, and `RestorableMongodbCollection` are subject to change while the feature is in preview.
+
+* After a restore, it is possible that for certain collections the consistent index is still rebuilding. You can check the status of the rebuild operation via the [IndexTransformationProgress](how-to-manage-indexing-policy.md) property.
+
+* The restore process restores all the properties of a container, including its TTL configuration. As a result, restored data can be deleted immediately if you configured the container that way. To prevent this situation, the restore timestamp must be earlier than the time the TTL properties were added to the container.
+
+## Next steps
+
+* Configure and manage continuous backup using [Azure portal](continuous-backup-restore-portal.md), [PowerShell](continuous-backup-restore-powershell.md), [CLI](continuous-backup-restore-command-line.md), or [Azure Resource Manager](continuous-backup-restore-template.md).
+* [Manage permissions](continuous-backup-restore-permissions.md) required to restore data with continuous backup mode.
+* [Resource model of continuous backup mode](continuous-backup-restore-resource-model.md)
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/continuous-backup-restore-permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/continuous-backup-restore-permissions.md
@@ -0,0 +1,130 @@
+
+ Title: Configure permissions to restore an Azure Cosmos DB account.
+description: Learn how to isolate and restrict the restore permissions for continuous backup account to a specific role or a principal. It shows how to assign a built-in role using Azure portal, CLI, or define a custom role.
+++ Last updated : 02/01/2021++++
+# Manage permissions to restore an Azure Cosmos DB account
+
+> [!IMPORTANT]
+> The point-in-time restore feature (continuous backup mode) for Azure Cosmos DB is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Azure Cosmos DB allows you to isolate and restrict the restore permissions for a continuous backup (Preview) account to a specific role or principal. The owner of the account can trigger a restore and assign a role to other principals to perform the restore operation. These permissions can be applied at the subscription scope, or more granularly at the source account scope, as shown in the following image:
+
+A scope is a set of resources that access applies to; to learn more about scopes, see the [Azure RBAC](../role-based-access-control/scope-overview.md) documentation. In Azure Cosmos DB, the applicable scopes are the source subscription and database account for most use cases. The principal performing the restore actions should have write permissions on the destination resource group.
+
+## Assign roles for restore using the Azure portal
+
+To perform a restore, a user or a principal needs the permission to restore (that is, the "restore/action" permission) and the permission to provision a new account (that is, the "write" permission). To grant these permissions, the owner can assign the "CosmosRestoreOperator" and "Cosmos DB Operator" built-in roles to a principal.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to your subscription, go to the **Access control (IAM)** tab, and select **Add** > **Add role assignment**.
+
+1. In the **Add role assignment** pane, for the **Role** field, select the **CosmosRestoreOperator** role. Choose **User, group, or a service principal** for the **Assign access to** field, and search for a user's name or email ID as shown in the following image:
+
+ :::image type="content" source="./media/continuous-backup-restore-permissions/assign-restore-operator-roles.png" alt-text="Assign CosmosRestoreOperator and Cosmos DB Operator roles." border="true":::
+
+1. Select **Save** to grant the "restore/action" permission.
+
+1. Repeat step 3 with the **Cosmos DB Operator** role to grant the write permission. When you assign this role from the Azure portal, it grants the permission to the whole subscription.
+
+## Permission scopes
+
+|Scope |Example |
+|||
+|Subscription | /subscriptions/00000000-0000-0000-0000-000000000000 |
+|Resource group | /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/Example-cosmosdb-rg |
+|CosmosDB restorable account resource | /subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDB/locations/West US/restorableDatabaseAccounts/23e99a35-cd36-4df4-9614-f767a03b9995|
+
+The restorable account resource can be extracted from the output of the `az cosmosdb restorable-database-account list --name <accountname>` command in CLI or `Get-AzCosmosDBRestorableDatabaseAccount -DatabaseAccountName <accountname>` cmdlet in PowerShell. The name attribute in the output represents the "instanceID" of the restorable account. To learn more, see the [PowerShell](continuous-backup-restore-powershell.md) or [CLI](continuous-backup-restore-command-line.md) article.
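+
+For example, a minimal sketch of extracting that scope value with the CLI, using a JMESPath query (the account name is a placeholder):
+
+```azurecli-interactive
+az cosmosdb restorable-database-account list \
+    --query "[?accountName=='myaccount'].id" \
+    --output tsv
+```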
+
+## Permissions
+
+The following permissions are required to perform the different restore-related activities for continuous backup mode accounts:
+
+|Permission |Impact |Minimum scope |Maximum scope |
+|||||
+|Microsoft.Resources/deployments/validate/action, Microsoft.Resources/deployments/write | These permissions are required for the ARM template deployment to create the restored account. See the sample custom role "RestorableAction" below for how to set this role. | Not applicable | Not applicable |
+|Microsoft.DocumentDB/databaseAccounts/write | This permission is required to restore an account into a resource group | Resource group under which the restored account is created. | Subscription under which the restored account is created |
+|Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restore/action |This permission is required on the source restorable database account scope to allow restore actions to be performed on it. | The "RestorableDatabaseAccount" resource belonging to the source account being restored. This value is also given by the "ID" property of the restorable database account resource. An example of restorable account is `/subscriptions/subscriptionId/providers/Microsoft.DocumentDB/locations/regionName/restorableDatabaseAccounts/<guid-instanceid>` | The subscription containing the restorable database account. The resource group cannot be chosen as scope. |
+|Microsoft.DocumentDB/locations/restorableDatabaseAccounts/read |This permission is required on the source restorable database account scope to list the database accounts that can be restored. | The "RestorableDatabaseAccount" resource belonging to the source account being restored. This value is also given by the "ID" property of the restorable database account resource. An example of restorable account is `/subscriptions/subscriptionId/providers/Microsoft.DocumentDB/locations/regionName/restorableDatabaseAccounts/<guid-instanceid>`| The subscription containing the restorable database account. The resource group cannot be chosen as scope. |
+|Microsoft.DocumentDB/locations/restorableDatabaseAccounts/*/read | This permission is required on the source restorable account scope to allow reading of restorable resources such as list of databases and containers for a restorable account. | The "RestorableDatabaseAccount" resource belonging to the source account being restored. This value is also given by the "ID" property of the restorable database account resource. An example of restorable account is `/subscriptions/subscriptionId/providers/Microsoft.DocumentDB/locations/regionName/restorableDatabaseAccounts/<guid-instanceid>`| The subscription containing the restorable database account. The resource group cannot be chosen as scope. |
+
+## Azure CLI role assignment scenarios to restore at different scopes
+
+Roles with permission can be assigned to different scopes to achieve granular control on who can perform the restore operation within a subscription or a given account.
+
+### Assign capability to restore from any restorable account in a subscription
+
+Assign the `CosmosRestoreOperator` built-in role at the subscription level:
+
+```azurecli-interactive
+az role assignment create --role "CosmosRestoreOperator" --assignee <email> --scope /subscriptions/<subscriptionId>
+```
+
+### Assign capability to restore from a specific account
+
+* Assign the user the write action on the specific resource group. This permission is required to create a new account in the resource group (a CLI sketch of this role assignment follows this list).
+
+* Assign the "CosmosRestoreOperator" built-in role to the specific restorable database account that needs to be restored. In the following command, the scope for the "RestorableDatabaseAccount" is retrieved from the "ID" property in the output of `az cosmosdb restorable-database-account` (if using CLI) or `Get-AzCosmosDBRestorableDatabaseAccount` (if using PowerShell).
+
+ ```azurecli-interactive
+ az role assignment create --role "CosmosRestoreOperator" --assignee <email> --scope <RestorableDatabaseAccount>
+ ```
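+
+For the first bullet, one possible sketch is to grant the write permission by assigning the **Cosmos DB Operator** built-in role at the resource group scope (the subscription ID, resource group name, and email are placeholders):
+
+```azurecli-interactive
+az role assignment create \
+    --role "Cosmos DB Operator" \
+    --assignee <email> \
+    --scope /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>
+```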
+
+### Assign capability to restore from any source account in a resource group
+This operation is currently not supported.
+
+## Custom role creation for restore action with CLI
+
+The subscription owner can grant the restore permission to any other Azure AD identity. The restore permission is based on the action "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restore/action", which should be included in the role assigned to the identity performing the restore. There is a built-in role called "CosmosRestoreOperator" that already includes this action. You can either assign the permission by using this built-in role or create a custom role.
+
+The "RestorableAction" below represents a custom role that you have to explicitly create. The following JSON definition creates a custom role named "RestorableAction" with the restore permission:
+
+```json
+{
+ "assignableScopes": [
+ "/subscriptions/23587e98-b6ac-4328-a753-03bcd3c8e744"
+ ],
+ "description": "Can do a restore request for any Cosmos DB database account with continuous backup",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Resources/deployments/validate/action",
+ "Microsoft.DocumentDB/databaseAccounts/write",
+ "Microsoft.Resources/deployments/write",
+ "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restore/action",
+ "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/read",
+ "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/*/read"
+ ],
+ "dataActions": [],
+ "notActions": [],
+ "notDataActions": []
+ }
+ ],
+ "Name": "RestorableAction",
+ "roleType": "CustomRole"
+}
+```
+
+Next, use the following command to create the role with the restore permission from the JSON definition:
+
+```azurecli-interactive
+az role definition create --role-definition <JSON_Role_Definition_Path>
+```
+
+## Next steps
+
+* Configure and manage continuous backup using the [Azure portal](continuous-backup-restore-portal.md), [PowerShell](continuous-backup-restore-powershell.md), [CLI](continuous-backup-restore-command-line.md), or [Azure Resource Manager](continuous-backup-restore-template.md).
+* [Resource model of continuous backup mode](continuous-backup-restore-resource-model.md)
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/continuous-backup-restore-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/continuous-backup-restore-portal.md
@@ -0,0 +1,102 @@
+
+ Title: Configure continuous backup and point in time restore using Azure portal in Azure Cosmos DB.
+description: Learn how to identify the restore point and configure continuous backup using Azure portal. It shows how to restore a live and deleted account.
+++ Last updated : 02/01/2021+++++
+# Configure and manage continuous backup and point in time restore (Preview) - using Azure portal
+
+> [!IMPORTANT]
+> The point-in-time restore feature (continuous backup mode) for Azure Cosmos DB is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Azure Cosmos DB's point-in-time restore feature (Preview) helps you recover from an accidental change within a container, restore a deleted account, database, or container, or restore into any region where backups existed. The continuous backup mode allows you to restore to any point in time within the last 30 days.
+
+This article describes how to identify the restore point and configure continuous backup using Azure portal.
+
+## <a id="provision"></a>Provision an account with continuous backup
+
+When creating a new Azure Cosmos DB account, for the **Backup policy** option, choose **Continuous** mode to enable the point-in-time restore functionality for the new account. After this feature is enabled for the account, all the databases and containers are available for continuous backup. With point-in-time restore, data is always restored to a new account; currently you can't restore to an existing account.
+
+## <a id="restore-live-account"></a>Restore a live account from accidental modification
+
+You can use the Azure portal to restore a live account or selected databases and containers under it. Use the following steps to restore your data:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Navigate to your Azure Cosmos DB account and open the **Point In Time Restore** pane.
+
+ > [!NOTE]
+ > The restore pane in Azure portal is only populated if you have the `Microsoft.DocumentDB/locations/restorableDatabaseAccounts/*/read` permission. To learn more about how to set this permission, see the [Backup and restore permissions](continuous-backup-restore-permissions.md) article.
+
+1. Fill in the following details to restore:
+
+   * **Restore Point (UTC)** – A timestamp within the last 30 days. The account should exist at that timestamp. Specify the restore point in UTC; it can be as precise as the second. Select the **Click here** link to get help with [identifying the restore point](#event-feed).
+
+   * **Location** – The destination region where the account is restored. The account should exist in this region at the given timestamp (for example, West US or East US). An account can be restored only to the regions in which the source account existed.
+
+   * **Restore Resource** – You can choose to restore either the **Entire account** or a **selected database/container**. The databases and containers should exist at the given timestamp. Based on the restore point and location selected, the restore resources are populated, which allows you to select the specific databases or containers that need to be restored.
+
+ * **Resource group** - Resource group under which the target account will be created and restored. The resource group must already exist.
+
+   * **Restore Target Account** – The target account name, which needs to follow the same guidelines as when you create a new account. This account is created by the restore process in the same region where your source account exists.
+
+ :::image type="content" source="./media/continuous-backup-restore-portal/restore-live-account-portal.png" alt-text="Restore a live account from accidental modification Azure portal." border="true":::
+
+1. After you select the above parameters, select the **Submit** button to kick off the restore. The restore cost is a one-time charge, which is based on the amount of data and the storage charges in the given region. To learn more, see the [Pricing](continuous-backup-restore-introduction.md#continuous-backup-pricing) section.
+
+## <a id="event-feed"></a>Use event feed to identify the restore time
+
+When filling out the restore point time in the Azure portal, if you need help with identifying the restore point, select the **click here** link; it takes you to the event feed blade. The event feed provides a full-fidelity list of the create, replace, and delete events on the databases and containers of the source account.
+
+For example, if you want to restore to the point before a certain container was deleted or updated, check this event feed. Events are displayed in chronologically descending order of the time when they occurred, with the most recent events at the top. You can browse through the results and select the time before or after an event to further narrow down the restore time.
+
+> [!NOTE]
+> The event feed does not display changes to item resources. You can always manually specify any timestamp in the last 30 days (as long as the account exists at that time) for the restore.
+
+## <a id="restore-deleted-account"></a>Restore a deleted account
+
+You can use the Azure portal to completely restore a deleted account within 30 days of its deletion. Use the following steps to restore a deleted account:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Search for "Azure Cosmos DB" resources in the global search bar. It lists all your existing accounts.
+1. Next, select the **Restore** button. The **Restore** pane displays a list of deleted accounts that can be restored within the retention period, which is 30 days from the deletion time.
+1. Choose the account that you want to restore.
+
+ :::image type="content" source="./media/continuous-backup-restore-portal/restore-deleted-account-portal.png" alt-text="Restore a deleted account from Azure portal." border="true":::
+
+ > [!NOTE]
+   > The restore pane in Azure portal is only populated if you have the `Microsoft.DocumentDB/locations/restorableDatabaseAccounts/*/read` permission. To learn more about how to set this permission, see the [Backup and restore permissions](continuous-backup-restore-permissions.md) article.
+
+1. Select an account to restore and enter the following details to restore a deleted account:
+
+   * **Restore Point (UTC)** – A timestamp within the last 30 days. The account should have existed at that timestamp. Specify the restore point in UTC; it can be as precise as the second.
+
+   * **Location** – The destination region where the account needs to be restored. The source account should exist in this region at the given timestamp, for example West US or East US.
+
+ * **Resource group** - Resource group under which the target account will be created and restored. The resource group must already exist.
+
+   * **Restore Target Account** – The target account name, which needs to follow the same guidelines as when you create a new account. This account is created by the restore process in the same region where your source account exists.
+
+## <a id="track-restore-status"></a>Track the status of restore operation
+
+After initiating a restore operation, select the **Notification** bell icon at the top-right corner of the portal. It gives a link that displays the status of the account being restored. While the restore is in progress, the status of the account is "Creating"; after the restore operation completes, the account status changes to "Online".
+
+## Next steps
+
+* Configure and manage continuous backup using [Azure CLI](continuous-backup-restore-command-line.md), [PowerShell](continuous-backup-restore-powershell.md), or [Azure Resource Manager](continuous-backup-restore-template.md).
+* [Resource model of continuous backup mode](continuous-backup-restore-resource-model.md)
+* [Manage permissions](continuous-backup-restore-permissions.md) required to restore data with continuous backup mode.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/continuous-backup-restore-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/continuous-backup-restore-powershell.md
@@ -0,0 +1,244 @@
+
+ Title: Use Azure PowerShell to configure continuous backup and point in time restore in Azure Cosmos DB.
+description: Learn how to provision an account with continuous backup and restore data using Azure PowerShell.
+++ Last updated : 02/01/2021+++++
+# Configure and manage continuous backup and point in time restore (Preview) - using Azure PowerShell
+
+> [!IMPORTANT]
+> The point-in-time restore feature (continuous backup mode) for Azure Cosmos DB is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Azure Cosmos DB's point-in-time restore feature (Preview) helps you recover from an accidental change within a container, restore a deleted account, database, or container, or restore into any region where backups existed. The continuous backup mode allows you to restore to any point in time within the last 30 days.
+
+This article describes how to provision an account with continuous backup and restore data using Azure PowerShell.
+
+## <a id="install-powershell"></a>Install Azure PowerShell
+
+1. Run the following command from Azure PowerShell to install the `Az.CosmosDB` preview module, which contains the commands related to point in time restore:
+
+ ```azurepowershell
+ Install-Module -Name Az.CosmosDB -AllowPrerelease
+ ```
+
+1. After installing the module, sign in to Azure with the following command:
+
+ ```azurepowershell
+ Connect-AzAccount
+ ```
+
+1. Select a specific subscription with the following command:
+
+ ```azurepowershell
+ Select-AzSubscription -Subscription <SubscriptionName>
+ ```
+
+## <a id="provision-sql-api"></a>Provision a SQL API account with continuous backup
+
+To provision an account with continuous backup, add an argument `-BackupPolicyType Continuous` along with the regular provisioning command.
+
+The following cmdlet is an example of a single-region write account named `pitracct2`, created with the continuous backup policy in the "West US" region under the "myrg" resource group:
+
+```azurepowershell
+
+New-AzCosmosDBAccount `
+ -ResourceGroupName "myrg" `
+ -Location "West US" `
+ -BackupPolicyType Continuous `
+ -Name "pitracct2" `
+ -ApiKind "Sql"
+
+```
+
+## <a id="provision-mongodb-api"></a>Provision a MongoDB API account with continuous backup
+
+The following cmdlet is an example of a continuous backup account named `pitracct3`, created in the "West US" region under the "myrg" resource group:
+
+```azurepowershell
+
+New-AzCosmosDBAccount `
+ -ResourceGroupName "myrg" `
+ -Location "West US" `
+ -BackupPolicyType Continuous `
+ -Name "pitracct3" `
+ -ApiKind "MongoDB" `
+ -ServerVersion "3.6"
+
+```
+
+## <a id="trigger-restore"></a>Trigger a restore operation
+
+The following cmdlet shows how to trigger a restore operation by using the target account, source account, location, resource group, and timestamp:
+
+```azurepowershell
+
+Restore-AzCosmosDBAccount `
+ -TargetResourceGroupName <resourceGroupName> `
+ -TargetDatabaseAccountName <restored-account-name> `
+ -SourceDatabaseAccountName <sourceDatabaseAccountName> `
+ -RestoreTimestampInUtc <UTC time> `
+ -Location <Azure Region Name>
+
+```
+
+**Example 1:** Restoring the entire account:
+
+```azurepowershell
+
+Restore-AzCosmosDBAccount `
+ -TargetResourceGroupName "rg" `
+ -TargetDatabaseAccountName "pitrbb-ps-1" `
+ -SourceDatabaseAccountName "source-sql" `
+ -RestoreTimestampInUtc "2021-01-05T22:06:00" `
+ -Location "West US"
+
+```
+
+**Example 2:** Restoring specific collections and databases. This example restores the collections myCol1 and myCol2 from myDB1, and the entire database myDB2, which includes all the containers under it.
+
+```azurepowershell
+$databaseToRestore1 = New-AzCosmosDBDatabaseToRestore -DatabaseName "myDB1" -CollectionName "myCol1", "myCol2"
+$databaseToRestore2 = New-AzCosmosDBDatabaseToRestore -DatabaseName "myDB2"
+
+Restore-AzCosmosDBAccount `
+ -TargetResourceGroupName "rg" `
+ -TargetDatabaseAccountName "pitrbb-ps-1" `
+ -SourceDatabaseAccountName "source-sql" `
+ -RestoreTimestampInUtc "2021-01-05T22:06:00" `
+ -DatabasesToRestore $databaseToRestore1, $databaseToRestore2 `
+ -Location "West US"
+
+```
+
+## <a id="enumerate-sql-api"></a>Enumerate restorable resources for SQL API
+
+The enumeration cmdlets help you discover the resources that are available for restore at various timestamps. They also provide a feed of key events on the restorable account, database, and container resources.
+
+**List all the accounts that can be restored in the current subscription**
+
+Run the `Get-AzCosmosDBRestorableDatabaseAccount` PowerShell command to list all the accounts that can be restored in the current subscription.
+
+The response includes all the database accounts (both live and deleted) that can be restored and the regions that they can be restored from.
+
+```console
+{
+ "accountName": "sampleaccount",
+ "apiType": "Sql",
+ "creationTime": "2020-08-08T01:04:52.070190+00:00",
+ "deletionTime": null,
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDB/locations/West US/restorableDatabaseAccounts/23e99a35-cd36-4df4-9614-f767a03b9995",
+ "identity": null,
+ "location": "West US",
+ "name": "23e99a35-cd36-4df4-9614-f767a03b9995",
+ "restorableLocations": [
+ {
+ "creationTime": "2020-08-08T01:04:52.945185+00:00",
+ "deletionTime": null,
+ "locationName": "West US",
+ "regionalDatabaseAccountInstanceId": "30701557-ecf8-43ce-8810-2c8be01dccf9"
+ },
+ {
+ "creationTime": "2020-08-08T01:15:43.507795+00:00",
+ "deletionTime": null,
+ "locationName": "East US",
+ "regionalDatabaseAccountInstanceId": "8283b088-b67d-4975-bfbe-0705e3e7a599"
+ }
+ ],
+ "tags": null,
+ "type": "Microsoft.DocumentDB/locations/restorableDatabaseAccounts"
+ },
+```
+
+Just like the "CreationTime" or "DeletionTime" for the account, there is a "CreationTime" or "DeletionTime" for the region too. These times allow you to choose the right region and a valid time range to restore into that region.
+
+**List all the versions of SQL databases in a live database account**
+
+Listing all the versions of databases allows you to choose the right database in a scenario where the actual time of existence of the database is unknown.
+
+Run the following PowerShell command to list all the versions of databases. This command only works with live accounts. The "DatabaseAccountInstanceId" and "LocationName" parameters are obtained from the "name" and "location" properties in the response of the `Get-AzCosmosDBRestorableDatabaseAccount` cmdlet. The "DatabaseAccountInstanceId" attribute refers to the "instanceId" property of the source database account being restored:
+
+```azurepowershell
+
+Get-AzCosmosdbSqlRestorableDatabase `
+ -LocationName "East US" `
+ -DatabaseAccountInstanceId <DatabaseAccountInstanceId>
+
+```
+
+**List all the versions of SQL containers of a database in a live database account**
+
+Use the following command to list all the versions of SQL containers. This command only works with live accounts. The "DatabaseRid" parameter is the "ResourceId" of the database you want to restore. It is the value of the "ownerResourceId" attribute found in the response of the `Get-AzCosmosdbSqlRestorableDatabase` cmdlet. The response also includes a list of operations performed on all the containers inside this database.
+
+```azurepowershell
+
+Get-AzCosmosdbSqlRestorableContainer `
+ -DatabaseAccountInstanceId "d056a4f8-044a-436f-80c8-cd3edbc94c68" `
+ -DatabaseRid "AoQ13r==" `
+ -LocationName "West US"
+
+```
+
+**Find databases or containers that can be restored at any given timestamp**
+
+Use the following command to get the list of databases or containers that can be restored at any given timestamp. This command only works with live accounts.
+
+```azurepowershell
+
+Get-AzCosmosdbSqlRestorableResource `
+ -DatabaseAccountInstanceId "d056a4f8-044a-436f-80c8-cd3edbc94c68" `
+ -LocationName "West US" `
+ -RestoreLocation "East US" `
+ -RestoreTimestamp "2020-07-20T16:09:53+0000"
+
+```
+
+## <a id="enumerate-mongodb-api"></a>Enumerate restorable resources for MongoDB
+
+The enumeration commands described below help you discover the resources that are available for restore at various timestamps. They also provide a feed of key events on the restorable account, database, and collection resources. These commands only work for live accounts, and they are similar to the SQL API commands but with "MongoDB" in the command name instead of "Sql".
+
+**List all the versions of MongoDB databases in a live database account**
+
+```azurepowershell
+
+Get-AzCosmosdbMongoDBRestorableDatabase `
+ -DatabaseAccountInstanceId "d056a4f8-044a-436f-80c8-cd3edbc94c68" `
+ -LocationName "West US"
+
+```
+
+**List all the versions of MongoDB collections of a database in a live database account**
+
+```azurepowershell
+
+Get-AzCosmosdbMongoDBRestorableCollection `
+ -DatabaseAccountInstanceId "d056a4f8-044a-436f-80c8-cd3edbc94c68" `
+ -DatabaseRid "AoQ13r==" `
+ -LocationName "West US"
+```
+
+**List all the resources of a MongoDB database account that are available to restore at a given timestamp and region**
+
+```azurepowershell
+
+Get-AzCosmosdbMongoDBRestorableResource `
+ -DatabaseAccountInstanceId "d056a4f8-044a-436f-80c8-cd3edbc94c68" `
+ -LocationName "West US" `
+ -RestoreLocation "West US" `
+ -RestoreTimestamp "2020-07-20T16:09:53+0000"
+```
+
+## Next steps
+
+* Configure and manage continuous backup using [Azure CLI](continuous-backup-restore-command-line.md), [Resource Manager](continuous-backup-restore-template.md), or [Azure portal](continuous-backup-restore-portal.md).
+* [Resource model of continuous backup mode](continuous-backup-restore-resource-model.md)
+* [Manage permissions](continuous-backup-restore-permissions.md) required to restore data with continuous backup mode.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/continuous-backup-restore-resource-model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/continuous-backup-restore-resource-model.md
@@ -0,0 +1,208 @@
+
+ Title: Resource model for the Azure Cosmos DB point-in-time restore feature.
+description: This article explains the resource model for the Azure Cosmos DB point-in-time restore feature. It explains the parameters that support the continuous backup and resources that can be restored in Azure Cosmos DB API for SQL and MongoDB accounts.
+++ Last updated : 02/01/2021+++++
+# Resource model for the Azure Cosmos DB point-in-time restore feature (Preview)
+
+> [!IMPORTANT]
+> The point-in-time restore feature (continuous backup mode) for Azure Cosmos DB is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+This article explains the resource model for the Azure Cosmos DB point-in-time restore feature (Preview). It explains the parameters that support continuous backup and the resources that can be restored in Azure Cosmos DB API for SQL and MongoDB accounts.
+
+## Database account's resource model
+
+The database account's resource model is updated with a few extra properties to support the new restore scenarios. These properties are **BackupPolicy**, **CreateMode**, and **RestoreParameters**.
+
+### BackupPolicy
+
+A new property named "Type" under the "backupPolicy" parameter in the account-level backup policy enables continuous backup and point-in-time restore functionality. This mode is called **continuous backup**. In the public preview, you can only set this mode when creating the account. After it's enabled, all the containers and databases created within this account have continuous backup and point-in-time restore functionality enabled by default.
+
+> [!NOTE]
+> Currently, the point-in-time restore feature is in public preview and is available for Azure Cosmos DB API for MongoDB and SQL accounts. After you create an account with continuous mode, you can't switch it to periodic mode.
+
+### CreateMode
+
+This property indicates how the account was created. The possible values are "Default" and "Restore". To perform a restore, set this value to "Restore" and provide the appropriate values in the `RestoreParameters` property.
+
+### RestoreParameters
+
+The `RestoreParameters` resource contains the restore operation details, including the account ID, the time to restore to, and the resources that need to be restored.
+
+|Property Name |Description |
+|||
+|restoreMode | The restore mode should be "PointInTime" |
+|restoreSource | The instanceId of the source account from which the restore will be initiated. |
+|restoreTimestampInUtc | The point in time in UTC to which the account should be restored. |
+|databasesToRestore | List of `DatabaseRestoreResource` objects to specify which databases and containers should be restored. If this value is empty, the entire account is restored. |
+
+**DatabaseRestoreResource** - Each resource represents a single database and all the collections under that database.
+
+|Property Name |Description |
+|||
+|databaseName | The name of the database |
+| collectionNames| The list of containers under this database |
+
+### Sample resource
+
+The following JSON is a sample database account resource with continuous backup enabled:
+
+```json
+{
+  "location": "westus",
+  "properties": {
+    "databaseAccountOfferType": "Standard",
+    "locations": [
+      {
+        "failoverPriority": 0,
+        "locationName": "southcentralus",
+        "isZoneRedundant": false
+      }
+    ],
+    "createMode": "Restore",
+    "restoreParameters": {
+      "restoreMode": "PointInTime",
+      "restoreSource": "/subscriptions/subid/providers/Microsoft.DocumentDB/locations/westus/restorableDatabaseAccounts/1a97b4bb-f6a0-430e-ade1-638d781830cc",
+      "restoreTimestampInUtc": "2020-06-11T22:05:09Z",
+      "databasesToRestore": [
+        {
+          "databaseName": "db1",
+          "collectionNames": [
+            "collection1",
+            "collection2"
+          ]
+        },
+        {
+          "databaseName": "db2",
+          "collectionNames": [
+            "collection3",
+            "collection4"
+          ]
+        }
+      ]
+    },
+    "backupPolicy": {
+      "type": "Continuous"
+    }
+  }
+}
+```
+
+## Restorable resources
+
+A set of new resources and APIs is available to help you discover critical information about the resources that can be restored, the locations where they can be restored from, and the timestamps when key operations were performed on them.
+
+> [!NOTE]
+> All the APIs used to enumerate these resources require the following permissions:
+> * `Microsoft.DocumentDB/locations/restorableDatabaseAccounts/*/read`
+> * `Microsoft.DocumentDB/locations/restorableDatabaseAccounts/read`
+
+### Restorable database account
+
+This resource contains a database account instance that can be restored. The database account can be either a deleted or a live account. It contains information that allows you to find the source database account that you want to restore.
+
+|Property Name |Description |
+|||
+| ID | The unique identifier of the resource. |
+| accountName | The global database account name. |
+| creationTime | The time in UTC when the account was created. |
+| deletionTime | The time in UTC when the account was deleted. This value is empty if the account is live. |
+| apiType | The API type of the Azure Cosmos DB account. |
+| restorableLocations | The list of locations where the account existed. |
+| restorableLocations: locationName | The region name of the regional account. |
+| restorableLocations: regionalDatabaseAccountInstanceId | The GUID of the regional account. |
+| restorableLocations: creationTime | The time in UTC when the regional account was created.|
+| restorableLocations: deletionTime | The time in UTC when the regional account was deleted. This value is empty if the regional account is live.|
+
+To get a list of all restorable accounts, see the [Restorable Database Accounts - List](restorable-database-accounts-list.md) or [Restorable Database Accounts - List By Location](restorable-database-accounts-list-by-location.md) articles.
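+
+For example, a minimal sketch of enumerating the restorable accounts in one region with the preview Azure CLI extension (the location value is a placeholder):
+
+```azurecli-interactive
+az cosmosdb restorable-database-account list --location "West US"
+```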
+
+### Restorable SQL database
+
+Each resource contains information about a mutation event, such as a creation or deletion, that occurred on the SQL database. This information can help in scenarios where the database was accidentally deleted and you need to find out when that event happened.
+
+|Property Name |Description |
+|||
+| eventTimestamp | The time in UTC when the database was created or deleted. |
+| ownerId | The name of the SQL database. |
+| ownerResourceId | The resource ID of the SQL database|
+| operationType | The operation type of this database event. Here are the possible values:<br/><ul><li>Create: database creation event</li><li>Delete: database deletion event</li><li>Replace: database modification event</li><li>SystemOperation: database modification event triggered by the system. This event is not initiated by the user</li></ul> |
+| database |The properties of the SQL database at the time of the event|
+
+To get a list of all database mutations, see the [Restorable Sql Databases - List](restorable-sql-databases-list.md) article.
+
+### Restorable SQL container
+
+Each resource contains information about a mutation event, such as a creation or deletion, that occurred on the SQL container. This information can help in scenarios where the container was modified or deleted, and you need to find out when that event happened.
+
+|Property Name |Description |
+|||
+| eventTimestamp | The time in UTC when this container event happened.|
+| ownerId| The name of the SQL container.|
+| ownerResourceId | The resource ID of the SQL container.|
+| operationType | The operation type of this container event. Here are the possible values: <br/><ul><li>Create: container creation event</li><li>Delete: container deletion event</li><li>Replace: container modification event</li><li>SystemOperation: container modification event triggered by the system. This event is not initiated by the user</li></ul> |
+| container | The properties of the SQL container at the time of the event.|
+
+To get a list of all container mutations under the same database, see the [Restorable Sql Containers - List](restorable-sql-containers-list.md) article.
+
+### Restorable SQL resources
+
+Each resource represents a single database and all the containers under that database.
+
+|Property Name |Description |
+|||
+| databaseName | The name of the SQL database. |
+| collectionNames | The list of SQL containers under this database.|
+
+To get a list of the SQL database and container combinations that exist on the account at the given timestamp and location, see the [Restorable Sql Resources - List](restorable-sql-resources-list.md) article.
+
+### Restorable MongoDB database
+
+Each resource contains information about a mutation event, such as a creation or deletion, that occurred on the MongoDB database. This information can help in scenarios where the database was accidentally deleted and you need to find out when that event happened.
+
+|Property Name |Description |
+|||
+|eventTimestamp| The time in UTC when this database event happened.|
+| ownerId| The name of the MongoDB database. |
+| ownerResourceId | The resource ID of the MongoDB database. |
+| operationType | The operation type of this database event. Here are the possible values:<br/><ul><li> Create: database creation event</li><li> Delete: database deletion event</li><li> Replace: database modification event</li><li> SystemOperation: database modification event triggered by the system. This event is not initiated by the user </li></ul> |
+
+To get a list of all database mutations, see the [Restorable Mongodb Databases - List](restorable-mongodb-databases-list.md) article.
+
+### Restorable MongoDB collection
+
+Each resource contains information about a mutation event, such as a creation or deletion, that occurred on the MongoDB collection. This information can help in scenarios where the collection was modified or deleted, and you need to find out when that event happened.
+
+|Property Name |Description |
+|||
+| eventTimestamp |The time in UTC when this collection event happened. |
+| ownerId| The name of the MongoDB collection. |
+| ownerResourceId | The resource ID of the MongoDB collection. |
+| operationType |The operation type of this collection event. Here are the possible values:<br/><ul><li>Create: collection creation event</li><li>Delete: collection deletion event</li><li>Replace: collection modification event</li><li>SystemOperation: collection modification event triggered by the system. This event is not initiated by the user</li></ul> |
+
+To get a list of all collection mutations under the same database, see the [Restorable Mongodb Collections - List](restorable-mongodb-collections-list.md) article.
+
+### Restorable MongoDB resources
+
+Each resource represents a single database and all the collections under that database.
+
+|Property Name |Description |
+|---|---|
+| databaseName |The name of the MongoDB database. |
+| collectionNames | The list of MongoDB collections under this database. |
+
+To get a list of all MongoDB database and collection combinations that exist on the account at a given timestamp and location, see the [Restorable Mongodb Resources - List](restorable-mongodb-resources-list.md) article.
+
+## Next steps
+
+* Configure and manage continuous backup using [Azure portal](continuous-backup-restore-portal.md), [PowerShell](continuous-backup-restore-powershell.md), [CLI](continuous-backup-restore-command-line.md), or [Azure Resource Manager](continuous-backup-restore-template.md).
+* [Manage permissions](continuous-backup-restore-permissions.md) required to restore data with continuous backup mode.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/continuous-backup-restore-template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/continuous-backup-restore-template.md
@@ -0,0 +1,108 @@
+
+ Title: Use ARM template to configure continuous backup and point in time restore in Azure Cosmos DB.
+description: Learn how to provision an account with continuous backup and restore data using Azure Resource Manager Templates.
+Last updated : 02/01/2021
+# Configure and manage continuous backup and point in time restore (Preview) - using Azure Resource Manager templates
+
+> [!IMPORTANT]
+> The point-in-time restore feature (continuous backup mode) for Azure Cosmos DB is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Azure Cosmos DB's point-in-time restore feature (Preview) helps you recover from an accidental change within a container, restore a deleted account, database, or container, or restore into any region where backups existed. Continuous backup mode allows you to restore to any point in time within the last 30 days.
+
+This article describes how to provision an account with continuous backup and restore data using Azure Resource Manager Templates.
+
+## <a id="provision"></a>Provision an account with continuous backup
+
+You can use Azure Resource Manager templates to deploy an Azure Cosmos DB account with continuous backup mode. When defining the template to provision an account, include the "backupPolicy" parameter as shown in the following example:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "name": "ademo-pitr1",
+ "type": "Microsoft.DocumentDB/databaseAccounts",
+ "apiVersion": "2016-03-31",
+ "location": "West US",
+ "properties": {
+ "locations": [
+ {
+ "locationName": "West US"
+ }
+ ],
+ "backupPolicy": {
+ "type": "Continuous"
+ },
+ "databaseAccountOfferType": "Standard"
+ }
+ }
+ ]
+}
+```
+
+Next, deploy the template by using Azure PowerShell or the Azure CLI. The following example shows how to deploy the template with an Azure CLI command:
+
+```azurecli-interactive
+az group deployment create -g <ResourceGroup> --template-file <ProvisionTemplateFilePath>
+```
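+
+To confirm that the account came up in continuous mode, you might inspect it after deployment. This is a sketch: it assumes the `backupPolicy` property from the template is surfaced on the account resource returned by `az cosmosdb show`.
+
+```azurecli-interactive
+# Query only the backup policy of the newly provisioned account
+# (the JMESPath path "backupPolicy" is an assumption).
+az cosmosdb show --name ademo-pitr1 --resource-group <ResourceGroup> --query "backupPolicy"
+```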
+
+## <a id="restore"></a>Restore using the Resource Manager template
+
+You can also restore an account by using a Resource Manager template. When defining the template, include the following parameters:
+
+* Set the "createMode" parameter to "Restore"
+* Define the "restoreParameters", notice that the "restoreSource" value is extracted from the output of the `az cosmosdb restorable-database-account list` command for your source account. The Instance ID attribute for your account name is used to do the restore.
+* Set the "restoreMode" parameter to "PointInTime" and configure the "restoreTimestampInUtc" value.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "name": "vinhpitrarmrestore-kal3",
+ "type": "Microsoft.DocumentDB/databaseAccounts",
+ "apiVersion": "2016-03-31",
+ "location": "West US",
+ "properties": {
+ "locations": [
+ {
+ "locationName": "West US"
+ }
+ ],
+ "databaseAccountOfferType": "Standard",
+ "createMode": "Restore",
+ "restoreParameters": {
+ "restoreSource": "/subscriptions/2296c272-5d55-40d9-bc05-4d56dc2d7588/providers/Microsoft.DocumentDB/locations/West US/restorableDatabaseAccounts/6a18ecb8-88c2-4005-8dce-07b44b9741df",
+ "restoreMode": "PointInTime",
+ "restoreTimestampInUtc": "6/24/2020 4:01:48 AM"
+ }
+ }
+ }
+ ]
+}
+```
+
+Next, deploy the template by using Azure PowerShell or the Azure CLI. The following example shows how to deploy the template with an Azure CLI command:
+
+```azurecli-interactive
+az group deployment create -g <ResourceGroup> --template-file <RestoreTemplateFilePath>
+```
+
+## Next steps
+
+* Configure and manage continuous backup using [Azure CLI](continuous-backup-restore-command-line.md), [PowerShell](continuous-backup-restore-powershell.md), or [Azure portal](continuous-backup-restore-portal.md).
+* [Resource model of continuous backup mode](continuous-backup-restore-resource-model.md)
+* [Manage permissions](continuous-backup-restore-permissions.md) required to restore data with continuous backup mode.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/index-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/index-policy.md
@@ -304,8 +304,7 @@ The following considerations are used when creating composite indexes to optimiz
| ```(age ASC, name ASC, timestamp ASC)``` | ```SELECT * FROM c WHERE c.age = 18 and c.name = "John" ORDER BY c.age ASC, c.name ASC, c.timestamp ASC``` | `Yes` |
| ```(age ASC, name ASC, timestamp ASC)``` | ```SELECT * FROM c WHERE c.age = 18 and c.name = "John" ORDER BY c.timestamp ASC``` | `No` |
-
-## Modifying the indexing policy
+## <a id="index-transformation"></a>Modifying the indexing policy
A container's indexing policy can be updated at any time [by using the Azure portal or one of the supported SDKs](how-to-manage-indexing-policy.md). An update to the indexing policy triggers a transformation from the old index to the new one, which is performed online and in-place (so no additional storage space is consumed during the operation). The old indexing policy is efficiently transformed to the new policy without affecting the write availability, read availability, or the throughput provisioned on the container. Index transformation is an asynchronous operation, and the time it takes to complete depends on the provisioned throughput, the number of items and their size.
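A minimal sketch of triggering such an index transformation from the Azure CLI follows. It assumes that the `--idx` parameter of `az cosmosdb sql container update` accepts the new indexing policy as a JSON file; that parameter is not shown in this article, so treat it as a hypothetical illustration rather than the documented method.

```azurecli-interactive
# Hypothetical example: replace the container's indexing policy with the one in
# policy.json, which starts an online, in-place index transformation.
az cosmosdb sql container update --account-name <AccountName> --resource-group <ResourceGroup> \
  --database-name <DatabaseName> --name <ContainerName> --idx @policy.json
```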
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/online-backup-and-restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/online-backup-and-restore.md
@@ -1,6 +1,6 @@
Title: Online backup and on-demand data restore in Azure Cosmos DB
-description: This article describes how automatic backup, on-demand data restore works, how to configure backup interval and retention, how to contacts support for a data restore in Azure Cosmos DB.
+ Title: Online backup and on-demand data restore in Azure Cosmos DB.
+description: This article describes how automatic backup and on-demand data restore work. It also explains the difference between continuous and periodic backup modes.
@@ -13,140 +13,20 @@
# Online backup and on-demand data restore in Azure Cosmos DB

[!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
-Azure Cosmos DB automatically takes backups of your data at regular intervals. The automatic backups are taken without affecting the performance or availability of the database operations. All the backups are stored separately in a storage service, and those backups are globally replicated for resiliency against regional disasters. The automatic backups are helpful in scenarios when you accidentally delete or update your Azure Cosmos account, database, or container and later require the data recovery.
+Azure Cosmos DB automatically takes backups of your data at regular intervals. The automatic backups are taken without affecting the performance or availability of the database operations. All the backups are stored separately in a storage service. The automatic backups are helpful in scenarios when you accidentally delete or update your Azure Cosmos account, database, or container and later require data recovery. There are two backup modes:
-## Automatic and online backups
+* **Periodic backup mode** - This mode is the default backup mode for all existing accounts. In this mode, backup is taken at a periodic interval and the data is restored by creating a request with the support team. In this mode, you configure a backup interval and retention for your account. The maximum retention period extends to a month. The minimum backup interval is one hour. To learn more, see the [Periodic backup mode](configure-periodic-backup-restore.md) article.
-With Azure Cosmos DB, not only your data, but also the backups of your data are highly redundant and resilient to regional disasters. The following steps show how Azure Cosmos DB performs data backup:
+* **Continuous backup mode** (currently in public preview) - You choose this mode while creating the Azure Cosmos DB account. This mode allows you to restore to any point in time within the last 30 days. To learn more, see the [Introduction to Continuous backup mode](continuous-backup-restore-introduction.md) article, and configure continuous backup with the [Azure portal](continuous-backup-restore-portal.md), [PowerShell](continuous-backup-restore-powershell.md), [CLI](continuous-backup-restore-command-line.md), and [Resource Manager](continuous-backup-restore-template.md) articles.
-* Azure Cosmos DB automatically takes a full backup of your database every 4 hours and at any point of time, only the latest two backups are stored by default. If the default intervals aren't sufficient for your workloads, you can change the backup interval and the retention period from the Azure portal. You can change the backup configuration during or after the Azure Cosmos account is created. If the container or database is deleted, Azure Cosmos DB retains the existing snapshots of a given container or database for 30 days.
-
-* Azure Cosmos DB stores these backups in Azure Blob storage whereas the actual data resides locally within Azure Cosmos DB.
-
-* To guarantee low latency, the snapshot of your backup is stored in Azure Blob storage in the same region as the current write region (or **one** of the write regions, in case you have a multi-region write configuration). For resiliency against regional disaster, each snapshot of the backup data in Azure Blob storage is again replicated to another region through geo-redundant storage (GRS). The region to which the backup is replicated is based on your source region and the regional pair associated with the source region. To learn more, see the [list of geo-redundant pairs of Azure regions](../best-practices-availability-paired-regions.md) article. You cannot access this backup directly. Azure Cosmos DB team will restore your backup when you request through a support request.
-
- The following image shows how an Azure Cosmos container with all the three primary physical partitions in West US is backed up in a remote Azure Blob Storage account in West US and then replicated to East US:
-
- :::image type="content" source="./media/online-backup-and-restore/automatic-backup.png" alt-text="Periodic full backups of all Cosmos DB entities in GRS Azure Storage" border="false":::
-
-* The backups are taken without affecting the performance or availability of your application. Azure Cosmos DB performs data backup in the background without consuming any additional provisioned throughput (RUs) or affecting the performance and availability of your database.
-
-## <a id="configure-backup-interval-retention"></a>Modify the backup interval and retention period
-
-Azure Cosmos DB automatically takes a full backup of your data for every 4 hours and at any point of time, the latest two backups are stored. This configuration is the default option and itΓÇÖs offered without any additional cost. You can change the default backup interval and retention period during the Azure Cosmos account creation or after the account is created. The backup configuration is set at the Azure Cosmos account level and you need to configure it on each account. After you configure the backup options for an account, itΓÇÖs applied to all the containers within that account. Currently you can change them backup options from Azure portal only.
-
-If you have accidentally deleted or corrupted your data, **before you create a support request to restore the data, make sure to increase the backup retention for your account to at least seven days. ItΓÇÖs best to increase your retention within 8 hours of this event.** This way, the Azure Cosmos DB team has enough time to restore your account.
-
-Use the following steps to change the default backup options for an existing Azure Cosmos account:
-
-1. Sign into the [Azure portal.](https://portal.azure.com/)
-1. Navigate to your Azure Cosmos account and open the **Backup & Restore** pane. Update the backup interval and the backup retention period as required.
-
- * **Backup Interval** - ItΓÇÖs the interval at which Azure Cosmos DB attempts to take a backup of your data. Backup takes a non-zero amount of time and in some case it could potentially fail due to downstream dependencies. Azure Cosmos DB tries its best to take a backup at the configured interval, however, it doesnΓÇÖt guarantee that the backup completes within that time interval. You can configure this value in hours or minutes. Backup Interval cannot be less than 1 hour and greater than 24 hours. When you change this interval, the new interval takes into effect starting from the time when the last backup was taken.
-
- * **Backup Retention** - It represents the period where each backup is retained. You can configure it in hours or days. The minimum retention period canΓÇÖt be less than two times the backup interval (in hours) and it canΓÇÖt be greater than 720 hours.
-
- * **Copies of data retained** - By default, two backup copies of your data are offered at free of charge. There is an additional charge if you need more than two copies. See the Consumed Storage section in the [Pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) to know the exact price for additional copies.
-
- :::image type="content" source="./media/online-backup-and-restore/configure-backup-interval-retention.png" alt-text="Configure backup interval and retention for an existing Azure Cosmos account" border="true":::
-
-If you configure backup options during the account creation, you can configure the **Backup policy**, which is either **Periodic** or **Continuous**. The periodic policy allows you to configure the Backup interval and Backup retention. The continuous policy is currently available by sign-up only. The Azure Cosmos DB team will assess your workload and approve your request.
--
-## Request data restore from a backup
-
-If you accidentally delete your database or a container, you can [file a support ticket](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) or [call the Azure support](https://azure.microsoft.com/support/options/) to restore the data from automatic online backups. Azure support is available for selected plans only such as **Standard**, **Developer**, and plans higher than those. Azure support is not available with **Basic** plan. To learn about different support plans, see the [Azure support plans](https://azure.microsoft.com/support/plans/) page.
-
-To restore a specific snapshot of the backup, Azure Cosmos DB requires that the data is available for the duration of the backup cycle for that snapshot.
-You should have the following details before requesting a restore:
-
-* Have your subscription ID ready.
-
-* Based on how your data was accidentally deleted or modified, you should prepare to have additional information. It is advised that you have the information available ahead to minimize the back-and-forth that can be detrimental in some time sensitive cases.
-
-* If the entire Azure Cosmos DB account is deleted, you need to provide the name of the deleted account. If you create another account with the same name as the deleted account, share that with the support team because it helps to determine the right account to choose. It's recommended to file different support tickets for each deleted account because it minimizes the confusion for the state of restore.
-
-* If one or more databases are deleted, you should provide the Azure Cosmos account, as well as the Azure Cosmos database names and specify if a new database with the same name exists.
-
-* If one or more containers are deleted, you should provide the Azure Cosmos account name, database names, and the container names. And specify if a container with the same name exists.
-
-* If you have accidentally deleted or corrupted your data, you should contact [Azure support](https://azure.microsoft.com/support/options/) within 8 hours so that the Azure Cosmos DB team can help you restore the data from the backups. **Before you create a support request to restore the data, make sure to [increase the backup retention](#configure-backup-interval-retention) for your account to at least seven days. ItΓÇÖs best to increase your retention within 8 hours of this event.** This way the Azure Cosmos DB support team will have enough time to restore your account.
-
-In addition to Azure Cosmos account name, database names, container names, you should specify the point in time to which the data can be restored to. It is important to be as precise as possible to help us determine the best available backups at that time. **It is also important to specify the time in UTC.**
-
-The following screenshot illustrates how to create a support request for a container(collection/graph/table) to restore data by using Azure portal. Provide additional details such as type of data, purpose of the restore, time when the data was deleted to help us prioritize the request.
--
-## Considerations for restoring the data from a backup
-
-You may accidentally delete or modify your data in one of the following scenarios:
-
-* Delete the entire Azure Cosmos account.
-
-* Delete one or more Azure Cosmos databases.
-
-* Delete one or more Azure Cosmos containers.
-
-* Delete or modify the Azure Cosmos items (for example, documents) within a container. This specific case is typically referred to as data corruption.
-
-* A shared offer database or containers within a shared offer database are deleted or corrupted.
-
-Azure Cosmos DB can restore data in all the above scenarios. When restoring, a new Azure Cosmos account is created to hold the restored data. The name of the new account, if it's not specified, will have the format `<Azure_Cosmos_account_original_name>-restored1`. The last digit is incremented when multiple restores are attempted. You can't restore data to a pre-created Azure Cosmos account.
-
-When you accidentally delete an Azure Cosmos account, we can restore the data into a new account with the same name, provided that the account name is not in use. So, we recommend that you don't re-create the account after deleting it. Because it not only prevents the restored data to use the same name, but also makes discovering the right account to restore from difficult.
-
-When you accidentally delete an Azure Cosmos database, we can restore the whole database or a subset of the containers within that database. It is also possible to select specific containers across databases and restore them to a new Azure Cosmos account.
-
-When you accidentally delete or modify one or more items within a container (the data corruption case), you need to specify the time to restore to. Time is important if there is data corruption. Because the container is live, the backup is still running, so if you wait beyond the retention period (the default is eight hours) the backups would be overwritten. In order to prevent the backup from being overwritten, increase the backup retention for your account to at least seven days. ItΓÇÖs best to increase your retention within 8 hours from the data corruption.
-
-If you have accidentally deleted or corrupted your data, you should contact [Azure support](https://azure.microsoft.com/support/options/) within 8 hours so that the Azure Cosmos DB team can help you restore the data from the backups. This way the Azure Cosmos DB support team will have enough time to restore your account.
-
-> [!NOTE]
-> After you restore the data, not all the source capabilities or settings are carried over to the restored account. The following settings are not carried over to the new account:
-> * VNET access control lists
-> * Stored procedures, triggers and user-defined functions
-> * Multi-region settings
-
-If you provision throughput at the database level, the backup and restore process in this case happen at the entire database level, and not at the individual containers level. In such cases, you can't select a subset of containers to restore.
-
-## Required permissions to change retention or restore from the portal
-Principals who are part of the role [CosmosdbBackupOperator](../role-based-access-control/built-in-roles.md#cosmosbackupoperator), owner or contributor are allowed to request a restore or change the retention period.
-
-## Understanding Costs of extra backups
-2 backups are provided free and extra backups are charged according to the region based pricing for backup storage described in [backup storage pricing](https://azure.microsoft.com/en-us/pricing/details/cosmos-db/). For example if Backup Retention is configured to 240 hrs i.e., 10 days and Backup Interval to 24 hrs. This implies 10 copies of the backup data. Assuming 1 TB of data in West US 2, the would be 1000 * 0.12 ~ $ 120 for backup storage in given month.
--
-## Options to manage your own backups
-
-With Azure Cosmos DB SQL API accounts, you can also maintain your own backups by using one of the following approaches:
-
-* Use [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md) to move data periodically to a storage of your choice.
-
-* Use Azure Cosmos DB [change feed](change-feed.md) to read data periodically for full backups or for incremental changes, and store it in your own storage.
-
-## Post-restore actions
-
-The primary goal of the data restore is to recover the data that you have accidentally deleted or modified. So, we recommend that you first inspect the content of the recovered data to ensure it contains what you are expecting. If everything looks good, you can migrate the data back to the primary account. Although it is possible to use the restored account as your new active account, it's not a recommended option if you have production workloads.
-
-After you restore the data, you get a notification about the name of the new account (itΓÇÖs typically in the format `<original-name>-restored1`) and the time when the account was restored to. The restored account will have the same provisioned throughput, indexing policies and it is in same region as the original account. A user who is the subscription admin or a co-admin can see the restored account.
-
-### Migrate data to the original account
-
-The following are different ways to migrate data back to the original account:
-
-* Use the [Azure Cosmos DB data migration tool](import-data.md).
-* Use the [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md).
-* Use the [change feed](change-feed.md) in Azure Cosmos DB.
-* You can write your own custom code.
-
-It is advised that you delete the container or database immediately after migrating the data. If you don't delete the restored databases or containers, they will incur cost for request units, storage, and egress.
+ > [!NOTE]
+ > If you configure a new account with continuous backup, you can do a self-service restore via the Azure portal, PowerShell, or the CLI. If your account is configured in continuous mode, you can't switch it back to periodic mode. Currently, existing accounts with periodic backup mode can't be changed to continuous mode.
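+
+As a hedged illustration of provisioning a new account in continuous mode from the CLI (the parameter name is an assumption based on the preview CLI support described in the linked CLI article):
+
+```azurecli-interactive
+# Create a new account with the continuous backup policy (assumes preview CLI
+# support for the --backup-policy-type parameter).
+az cosmosdb create --name <AccountName> --resource-group <ResourceGroup> --backup-policy-type Continuous
+```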
## Next steps
-Next you can learn about how to restore data from an Azure Cosmos account or learn how to migrate data to an Azure Cosmos account
+Next, you can learn how to configure and manage periodic and continuous backup modes for your account:
-* To make a restore request, contact Azure Support, [file a ticket from the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade)
-* [Use Cosmos DB change feed](change-feed.md) to move data to Azure Cosmos DB.
-* [Use Azure Data Factory](../data-factory/connector-azure-cosmos-db.md) to move data to Azure Cosmos DB.
+* [Configure and manage periodic backup](configure-periodic-backup-restore.md) policy.
+* What is [continuous backup](continuous-backup-restore-introduction.md) mode?
+* Configure and manage continuous backup using [Azure portal](continuous-backup-restore-portal.md), [PowerShell](continuous-backup-restore-powershell.md), [CLI](continuous-backup-restore-command-line.md), or [Azure Resource Manager](continuous-backup-restore-template.md).
+* [Manage permissions](continuous-backup-restore-permissions.md) required to restore data with continuous backup mode.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/partitioning-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/partitioning-overview.md
@@ -32,11 +32,14 @@ There is no limit to the number of logical partitions in your container. Each lo
A container is scaled by distributing data and throughput across physical partitions. Internally, one or more logical partitions are mapped to a single physical partition. Typically smaller containers have many logical partitions but they only require a single physical partition. Unlike logical partitions, physical partitions are an internal implementation of the system and they are entirely managed by Azure Cosmos DB.
-The number of physical partitions in your container depends on the following configuration:
+The number of physical partitions in your container depends on the following:
* The amount of throughput provisioned (each individual physical partition can provide a throughput of up to 10,000 request units per second).
* The total data storage (each individual physical partition can store up to 50 GB of data).
+> [!NOTE]
+> Physical partitions are an internal implementation of the system and they are entirely managed by Azure Cosmos DB. When developing your solutions, don't focus on physical partitions because you can't control them; instead, focus on your partition keys. If you choose a partition key that evenly distributes throughput consumption across logical partitions, you will ensure that throughput consumption across physical partitions is balanced.
+ There is no limit to the total number of physical partitions in your container. As your provisioned throughput or data size grows, Azure Cosmos DB will automatically create new physical partitions by splitting existing ones. Physical partition splits do not impact your application's availability. After the physical partition split, all data within a single logical partition will still be stored on the same physical partition. A physical partition split simply creates a new mapping of logical partitions to physical partitions.

Throughput provisioned for a container is divided evenly among physical partitions. A partition key design that doesn't distribute requests evenly might result in too many requests directed to a small subset of partitions that become "hot." Hot partitions lead to inefficient use of provisioned throughput, which might result in rate-limiting and higher costs.
@@ -45,13 +48,10 @@ You can see your container's physical partitions in the **Storage** section of t
:::image type="content" source="./media/partitioning-overview/view-partitions-zoomed-out.png" alt-text="Viewing number of physical partitions" lightbox="./media/partitioning-overview/view-partitions-zoomed-in.png" :::
-In the above screenshot, a container has `/foodGroup` as the partition key. Each of the three bars in the graph represents a physical partition. In the image, **partition key range** is the same as a physical partition. The selected physical partition contains three logical partitions: `Beef Products`, `Vegetable and Vegetable Products`, and `Soups, Sauces, and Gravies`.
+In the above screenshot, a container has `/foodGroup` as the partition key. Each of the three bars in the graph represents a physical partition. In the image, **partition key range** is the same as a physical partition. The selected physical partition contains the three largest logical partitions by size: `Beef Products`, `Vegetable and Vegetable Products`, and `Soups, Sauces, and Gravies`.
If you provision a throughput of 18,000 request units per second (RU/s), then each of the three physical partitions can utilize 1/3 of the total provisioned throughput. Within the selected physical partition, the logical partition keys `Beef Products`, `Vegetable and Vegetable Products`, and `Soups, Sauces, and Gravies` can, collectively, utilize the physical partition's 6,000 provisioned RU/s. Because provisioned throughput is evenly divided across your container's physical partitions, it's important to choose a partition key that evenly distributes throughput consumption by [choosing the right logical partition key](#choose-partitionkey).
-> [!NOTE]
-> If you choose a partition key that evenly distributes throughput consumption across logical partitions, you will ensure that throughput consumption across physical partitions is balanced.
-
## Managing logical partitions

Azure Cosmos DB transparently and automatically manages the placement of logical partitions on physical partitions to efficiently satisfy the scalability and performance needs of the container. As the throughput and storage requirements of an application increase, Azure Cosmos DB moves logical partitions to automatically spread the load across a greater number of physical partitions. You can learn more about [physical partitions](partitioning-overview.md#physical-partitions).
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/prevent-rate-limiting-errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/prevent-rate-limiting-errors.md
@@ -40,9 +40,9 @@ az cosmosdb show --name accountname --resource-group resourcegroupname
```bash
az cosmosdb update --name accountname --resource-group resourcegroupname --capabilities EnableMongo DisableRateLimitingResponses
```
-The following command will **Disable** SSR for all collections in your database account. It may take up to 15min for this change to take effect.
+The following command will **Disable** SSR for all collections in your database account by removing "DisableRateLimitingResponses" from the capabilities list. It may take up to 15 minutes for this change to take effect.
```bash
-az cosmosdb update --name accountname --resource-group resourcegroupname --capabilities EnableMongo DisableRateLimitingResponses
+az cosmosdb update --name accountname --resource-group resourcegroupname --capabilities EnableMongo
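+
+# Optionally verify the resulting capability list afterward, reusing the
+# "az cosmosdb show" command shown earlier in this article:
+# az cosmosdb show --name accountname --resource-group resourcegroupname --query "capabilities"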
```

## Frequently Asked Questions
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/restorable-database-accounts-list-by-location https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/restorable-database-accounts-list-by-location.md
@@ -0,0 +1,189 @@
+
+ Title: List restorable database accounts by location using Azure Cosmos DB REST API
+description: Lists all the restorable Azure Cosmos DB database accounts available under the subscription and in a region.
+Last updated : 02/03/2021
+# List restorable database accounts by location using Azure Cosmos DB REST API
+
+> [!IMPORTANT]
+> The point-in-time restore feature (continuous backup mode) for Azure Cosmos DB is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Lists all the restorable Azure Cosmos DB database accounts available under the subscription and in a region. This call requires `Microsoft.DocumentDB/locations/restorableDatabaseAccounts/read` permission.
+
+```http
+GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.DocumentDB/locations/{location}/restorableDatabaseAccounts?api-version=2020-06-01-preview
+
+```
+
+## URI parameters
+
+| Name | In | Required | Type | Description |
+|---|---|---|---|---|
+| **location** | path | True | string| Azure Cosmos DB region, with spaces between words and each word capitalized. |
+| **subscriptionId** | path | True | string| The ID of the target subscription. |
+| **api-version** | query | True | string | The API version to use for this operation. |
+
+## Responses
+
+| Name | Type | Description |
+|---|---|---|
+| 200 OK | [RestorableDatabaseAccountsListResult](#restorabledatabaseaccountslistresult)| The operation completed successfully. |
+| Other Status Codes | [DefaultErrorResponse](#defaulterrorresponse)| Error response describing why the operation failed. |
+
+## Examples
+
+### CosmosDBDatabaseAccountList
+
+**Sample request**
+
+```http
+GET https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDB/locations/West US/restorableDatabaseAccounts?api-version=2020-06-01-preview
+```
+
+**Sample response**
+
+Status code: 200
+
+```json
+{
+ "value": [
+ {
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDB/locations/West US/restorableDatabaseAccounts/d9b26648-2f53-4541-b3d8-3044f4f9810d",
+ "name": "d9b26648-2f53-4541-b3d8-3044f4f9810d",
+ "location": "West US",
+ "type": "Microsoft.DocumentDB/locations/restorableDatabaseAccounts",
+ "properties": {
+ "accountName": "ddb1",
+ "creationTime": "2020-04-11T21:56:15Z",
+ "deletionTime": "2020-06-12T22:05:09Z",
+ "apiType": "Sql",
+ "restorableLocations": [
+ {
+ "locationName": "South Central US",
+ "regionalDatabaseAccountInstanceId": "d7a01f78-606f-45c6-9dac-0df32f433bb5",
+ "creationTime": "2020-10-30T21:13:10Z",
+ "deletionTime": "2020-10-30T21:13:35Z"
+ },
+ {
+ "locationName": "West US",
+ "regionalDatabaseAccountInstanceId": "fdb43d84-1572-4697-b6e7-2bcda0c51b2c",
+ "creationTime": "2020-10-30T21:13:10Z"
+ }
+ ]
+ }
+ },
+ {
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDB/locations/West US/restorableDatabaseAccounts/4f9e6ace-ac7a-446c-98bc-194c502a06b4",
+ "name": "4f9e6ace-ac7a-446c-98bc-194c502a06b4",
+ "location": "West US",
+ "type": "Microsoft.DocumentDB/locations/restorableDatabaseAccounts",
+ "properties": {
+ "accountName": "ddb2",
+ "creationTime": "2020-05-01T08:05:18Z",
+ "apiType": "Sql",
+ "restorableLocations": [
+ {
+ "locationName": "South Central US",
+ "regionalDatabaseAccountInstanceId": "d7a01f78-606f-45c6-9dac-0df32f433bb5",
+ "creationTime": "2020-10-30T21:13:10Z",
+ "deletionTime": "2020-10-30T21:13:35Z"
+ },
+ {
+ "locationName": "West US",
+ "regionalDatabaseAccountInstanceId": "fdb43d84-1572-4697-b6e7-2bcda0c51b2c",
+ "creationTime": "2020-10-30T21:13:10Z"
+ }
+ ]
+ }
+ }
+ ]
+}
+```
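+
+If you prefer the Azure CLI over a raw HTTP client, the same request can be issued with `az rest` (a sketch; substitute your own subscription ID and region, and note the space in the region name is percent-encoded):
+
+```azurecli-interactive
+# Issue the documented GET request through the Azure CLI.
+az rest --method get \
+  --url "https://management.azure.com/subscriptions/<subscriptionId>/providers/Microsoft.DocumentDB/locations/West%20US/restorableDatabaseAccounts?api-version=2020-06-01-preview"
+```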
+
+## Definitions
+
+|Definition | Description|
+|---|---|
+| [ApiType](#apitype) | Enum to indicate the API type of the restorable database account. |
+| [DefaultErrorResponse](#defaulterrorresponse) | An error response from the service. |
+| [ErrorResponse](#errorresponse) | Error response. |
+| [RestorableDatabaseAccountGetResult](#restorabledatabaseaccountgetresult) | An Azure Cosmos DB restorable database account. |
+| [RestorableDatabaseAccountsListResult](#restorabledatabaseaccountslistresult) | The List operation response that contains the restorable database accounts and their properties. |
+| [RestorableLocationResource](#restorablelocationresource) | Properties of the regional restorable account. |
+
+### <a id="apitype"></a>ApiType
+
+Enum to indicate the API type of the restorable database account.
+
+| **Name** | **Type** |
+|---|---|
+| Cassandra |string|
+| Gremlin |string|
+| GremlinV2 |string|
+| MongoDB |string|
+| Sql |string|
+| Table |string|
+
+### <a id="defaulterrorresponse"></a>DefaultErrorResponse
+
+An error response from the service.
+
+| **Name** | **Type** | **Description** |
+|---|---|---|
+| error | [ErrorResponse](#errorresponse)| Error response. |
+
+### <a id="errorresponse"></a>ErrorResponse
+
+Error Response.
+
+| **Name** | **Type** | **Description** |
+|---|---|---|
+| code |string| Error code. |
+| message |string| Error message indicating why the operation failed. |
+
+### <a id="restorabledatabaseaccountgetresult"></a>RestorableDatabaseAccountGetResult
+
+An Azure Cosmos DB restorable database account.
+
+| **Name** | **Type** | **Description** |
+|---|---|---|
+| ID |string| The unique resource identifier of the Azure Resource Manager resource. |
+| location |string| The location of the resource group to which the resource belongs. |
+| name |string| The name of the Resource Manager resource. |
+| properties.accountName |string| The name of the global database account. |
+| properties.apiType |[ApiType](#apitype)| The API type of the restorable database account. |
+| properties.creationTime |string| The creation time of the restorable database account (ISO-8601 format). |
+| properties.deletionTime |string| The time at which the restorable database account has been deleted (ISO-8601 format). |
+| properties.restorableLocations |[RestorableLocationResource](#restorablelocationresource)[]| List of regions from which the database account can be restored. |
+| type |string| The type of Azure resource. |
+
+### <a id="restorabledatabaseaccountslistresult"></a>RestorableDatabaseAccountsListResult
+
+The List operation response that contains the restorable database accounts and their properties.
+
+| **Name** | **Type** | **Description** |
+|---|---|---|
+| value |[RestorableDatabaseAccountGetResult](#restorabledatabaseaccountgetresult)[]| List of restorable database accounts and their properties. |
+
+### <a id="restorablelocationresource"></a>RestorableLocationResource
+
+Properties of the regional restorable account.
+
+| **Name** | **Type** | **Description** |
+|---|---|---|
+| creationTime |string| The creation time of the regional restorable database account (ISO-8601 format). |
+| deletionTime |string| The time at which the regional restorable database account has been deleted (ISO-8601 format). |
+| locationName |string| The location of the regional restorable account. |
+| regionalDatabaseAccountInstanceId |string| The instance ID of the regional restorable account. |
+
+## Next steps
+
+* [List restorable collections](restorable-mongodb-collections-list.md) in Azure Cosmos DB API for MongoDB using REST API.
+* [Resource model](continuous-backup-restore-resource-model.md) of continuous backup mode.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/restorable-database-accounts-list https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/restorable-database-accounts-list.md
@@ -0,0 +1,187 @@
+
+ Title: List restorable database accounts using Azure Cosmos DB REST API
+description: Lists all the restorable Azure Cosmos DB database accounts available under a subscription.
+Last updated : 02/03/2021
+# List restorable database accounts using Azure Cosmos DB REST API
+
+> [!IMPORTANT]
+> The point-in-time restore feature (continuous backup mode) for Azure Cosmos DB is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Lists all the restorable Azure Cosmos DB database accounts available under the subscription. This call requires `Microsoft.DocumentDB/locations/restorableDatabaseAccounts/read` permission.
+
+```http
+GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.DocumentDB/restorableDatabaseAccounts?api-version=2020-06-01-preview
+```
+
+## URI parameters
+
+| Name | In | Required | Type | Description |
+|---|---|---|---|---|
+| **subscriptionId** | path | True | string| The ID of the target subscription. |
+| **api-version** | query | True | string | The API version to use for this operation. |
+
+## Responses
+
+| Name | Type | Description |
+|---|---|---|
+| 200 OK | [RestorableDatabaseAccountsListResult](#restorabledatabaseaccountslistresult)| The operation completed successfully. |
+| Other Status Codes | [DefaultErrorResponse](#defaulterrorresponse)| Error response describing why the operation failed. |
+
+## Examples
+
+### CosmosDBDatabaseAccountList
+
+**Sample request**
+
+```http
+GET https://management.azure.com/subscriptions/subid/providers/Microsoft.DocumentDB/restorableDatabaseAccounts?api-version=2020-06-01-preview
+```
+
+**Sample response**
+
+Status code: 200
+
+```json
+{
+ "value": [
+ {
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDB/locations/West US/restorableDatabaseAccounts/d9b26648-2f53-4541-b3d8-3044f4f9810d",
+ "name": "d9b26648-2f53-4541-b3d8-3044f4f9810d",
+ "location": "West US",
+ "type": "Microsoft.DocumentDB/locations/restorableDatabaseAccounts",
+ "properties": {
+ "accountName": "ddb1",
+ "creationTime": "2020-04-11T21:56:15Z",
+ "deletionTime": "2020-06-12T22:05:09Z",
+ "apiType": "Sql",
+ "restorableLocations": [
+ {
+ "locationName": "South Central US",
+ "regionalDatabaseAccountInstanceId": "d7a01f78-606f-45c6-9dac-0df32f433bb5",
+ "creationTime": "2020-10-30T21:13:10Z",
+ "deletionTime": "2020-10-30T21:13:35Z"
+ },
+ {
+ "locationName": "West US",
+ "regionalDatabaseAccountInstanceId": "fdb43d84-1572-4697-b6e7-2bcda0c51b2c",
+ "creationTime": "2020-10-30T21:13:10Z"
+ }
+ ]
+ }
+ },
+ {
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDB/locations/West US/restorableDatabaseAccounts/4f9e6ace-ac7a-446c-98bc-194c502a06b4",
+ "name": "4f9e6ace-ac7a-446c-98bc-194c502a06b4",
+ "location": "East US",
+ "type": "Microsoft.DocumentDB/locations/restorableDatabaseAccounts",
+ "properties": {
+ "accountName": "ddb2",
+ "creationTime": "2020-05-01T08:05:18Z",
+ "apiType": "Sql",
+ "restorableLocations": [
+ {
+ "locationName": "South Central US",
+ "regionalDatabaseAccountInstanceId": "d7a01f78-606f-45c6-9dac-0df32f433bb5",
+ "creationTime": "2020-10-30T21:13:10Z",
+ "deletionTime": "2020-10-30T21:13:35Z"
+ },
+ {
+ "locationName": "West US",
+ "regionalDatabaseAccountInstanceId": "fdb43d84-1572-4697-b6e7-2bcda0c51b2c",
+ "creationTime": "2020-10-30T21:13:10Z"
+ }
+ ]
+ }
+ }
+ ]
+}
+```
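+
+The same request can also be issued from the Azure CLI with `az rest` (a sketch; substitute your own subscription ID):
+
+```azurecli-interactive
+# Issue the documented subscription-wide GET request through the Azure CLI.
+az rest --method get \
+  --url "https://management.azure.com/subscriptions/<subscriptionId>/providers/Microsoft.DocumentDB/restorableDatabaseAccounts?api-version=2020-06-01-preview"
+```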
+
+## Definitions
+
+|Definition | Description|
+|---|---|
+| [ApiType](#apitype) | Enum to indicate the API type of the restorable database account. |
+| [DefaultErrorResponse](#defaulterrorresponse) | An error response from the service. |
+| [ErrorResponse](#errorresponse) | Error Response. |
+| [RestorableDatabaseAccountGetResult](#restorabledatabaseaccountgetresult) | An Azure Cosmos DB restorable database account. |
+| [RestorableDatabaseAccountsListResult](#restorabledatabaseaccountslistresult) | The List operation response that contains the restorable database accounts and their properties. |
+| [RestorableLocationResource](#restorablelocationresource) | Properties of the regional restorable account. |
+
+### <a id="apitype"></a>ApiType
+
+Enum to indicate the API type of the restorable database account.
+
+| **Name** | **Type** |
+|---|---|
+| Cassandra |string|
+| Gremlin |string|
+| GremlinV2 |string|
+| MongoDB |string|
+| Sql |string|
+| Table |string|
+
+### <a id="defaulterrorresponse"></a>DefaultErrorResponse
+
+An error response from the service.
+
+| **Name** | **Type** | **Description** |
+|---|---|---|
+| error | [ErrorResponse](#errorresponse)| Error Response. |
+
+### <a id="errorresponse"></a>ErrorResponse
+
+Error Response.
+
+| **Name** | **Type** | **Description** |
+|---|---|---|
+| code |string| Error code. |
+| message |string| Error message indicating why the operation failed. |
+
+### <a id="restorabledatabaseaccountgetresult"></a>RestorableDatabaseAccountGetResult
+
+An Azure Cosmos DB restorable database account.
+
+| **Name** | **Type** | **Description** |
+|---|---|---|
+| ID |string| The unique resource identifier of the Azure Resource Manager resource. |
+| location |string| The location of the resource group to which the resource belongs. |
+| name |string| The name of the Resource Manager resource. |
+| properties.accountName |string| The name of the global database account. |
+| properties.apiType |[ApiType](#apitype)| The API type of the restorable database account. |
+| properties.creationTime |string| The creation time of the restorable database account (ISO-8601 format). |
+| properties.deletionTime |string| The time at which the restorable database account has been deleted (ISO-8601 format). |
+| properties.restorableLocations |[RestorableLocationResource](#restorablelocationresource)[]| List of regions from which the database account can be restored. |
+| type |string| The type of Azure resource. |
+
+### <a id="restorabledatabaseaccountslistresult"></a>RestorableDatabaseAccountsListResult
+
+The List operation response that contains the restorable database accounts and their properties.
+
+| **Name** | **Type** | **Description** |
+|---|---|---|
+| value |[RestorableDatabaseAccountGetResult](#restorabledatabaseaccountgetresult)[]| List of restorable database accounts and their properties. |
+
+### <a id="restorablelocationresource"></a>RestorableLocationResource
+
+Properties of the regional restorable account.
+
+| **Name** | **Type** | **Description** |
+|---|---|---|
+| creationTime |string| The creation time of the regional restorable database account (ISO-8601 format). |
+| deletionTime |string| The time at which the regional restorable database account has been deleted (ISO-8601 format). |
+| locationName |string| The location of the regional restorable account. |
+| regionalDatabaseAccountInstanceId |string| The instance ID of the regional restorable account. |
+
+## Next steps
+
+* [Restorable database accounts - List by location.](restorable-database-accounts-list-by-location.md)
+* [Resource model](continuous-backup-restore-resource-model.md) of continuous backup mode.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/restorable-mongodb-collections-list https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/restorable-mongodb-collections-list.md
@@ -0,0 +1,163 @@
+
+ Title: List restorable collections in Azure Cosmos DB MongoDB API - REST API
+description: Show the event feed of all mutations done on all the Azure Cosmos DB MongoDB collections under a specific database. This helps in scenarios where a collection was accidentally deleted.
+++ Last updated : 02/03/2021+++
+# List restorable collections in Azure Cosmos DB API for MongoDB using REST API
+
+> [!IMPORTANT]
+> The point-in-time restore feature (continuous backup mode) for Azure Cosmos DB is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Show the event feed of all mutations done on all the Azure Cosmos DB API for MongoDB collections under a specific database. This helps in scenarios where a collection was accidentally deleted. This API requires `Microsoft.DocumentDB/locations/restorableDatabaseAccounts/*/read` permission.
+
+```http
+GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.DocumentDB/locations/{location}/restorableDatabaseAccounts/{instanceId}/restorableMongodbCollections?api-version=2020-06-01-preview
+```
+
+With optional parameters:
+
+```http
+GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.DocumentDB/locations/{location}/restorableDatabaseAccounts/{instanceId}/restorableMongodbCollections?api-version=2020-06-01-preview&restorableMongodbDatabaseRid={restorableMongodbDatabaseRid}
+```
+
+## URI parameters
+
+| Name | In | Required | Type | Description |
+|---|---|---|---|---|
+| **instanceId** | path | True |string| The instanceId GUID of a restorable database account. |
+| **location** | path | True | string| Azure Cosmos DB region, with spaces between words and each word capitalized. |
+| **subscriptionId** | path | True | string| The ID of the target subscription. |
+| **api-version** | query | True | string | The API version to use for this operation. |
+| **restorableMongodbDatabaseRid** | query | |string| The resource ID of the Azure Cosmos DB API for MongoDB database. |
+
+## Responses
+
+| Name | Type | Description |
+|---|---|---|
+| 200 OK | [RestorableMongodbCollectionsListResult](#restorablemongodbcollectionslistresult)| The operation completed successfully. |
+| Other Status Codes | [DefaultErrorResponse](#defaulterrorresponse)| Error response describing why the operation failed.|
+
+## Examples
+
+### CosmosDBRestorableMongodbCollectionList
+
+**Sample request**
+
+```http
+GET https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDB/locations/WestUS/restorableDatabaseAccounts/98a570f2-63db-4117-91f0-366327b7b353/restorableMongodbCollections?api-version=2020-06-01-preview&restorableMongodbDatabaseRid=PD5DALigDgw=
+```
+
+**Sample response**
+
+Status code: 200
+
+```json
+{
+ "value": [
+ {
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDb/locations/westus/restorableDatabaseAccounts/98a570f2-63db-4117-91f0-366327b7b353/restorableMongodbCollections/79609a98-3394-41f8-911f-cfab0c075c86",
+ "type": "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restorableMongodbCollections",
+ "name": "79609a98-3394-41f8-911f-cfab0c075c86",
+ "properties": {
+ "resource": {
+ "_rid": "zAyAPQAAAA==",
+ "eventTimestamp": "2020-10-13T04:56:42Z",
+ "ownerId": "Collection1",
+ "ownerResourceId": "V18LoLrv-qA=",
+ "operationType": "Create"
+ }
+ }
+ }
+ ]
+}
+```
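+
+As a sketch, the same call can be made with `az rest`; the endpoint and the `restorableMongodbDatabaseRid` query parameter are the ones from the sample request above, with placeholder values substituted:
+
+```azurecli-interactive
+# Issue the documented GET request through the Azure CLI; replace the
+# placeholders with your subscription ID, instance ID, and database resource ID.
+az rest --method get \
+  --url "https://management.azure.com/subscriptions/<subscriptionId>/providers/Microsoft.DocumentDB/locations/WestUS/restorableDatabaseAccounts/<instanceId>/restorableMongodbCollections?api-version=2020-06-01-preview&restorableMongodbDatabaseRid=<databaseRid>"
+```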
+
+## Definitions
+
+|Definition | Description|
+|---|---|
+| [DefaultErrorResponse](#defaulterrorresponse) | An error response from the service. |
+| [ErrorResponse](#errorresponse) | Error response. |
+| [OperationType](#operationtype) | Enum to indicate the operation type of the event. |
+| [Resource](#resource) | The resource of an Azure Cosmos DB API for MongoDB collection event. |
+| [RestorableMongodbCollectionGetResult](#restorablemongodbcollectiongetresult) | An Azure Cosmos DB API for MongoDB collection event. |
+| [RestorableMongodbCollectionProperties](#restorablemongodbcollectionproperties) | The properties of an Azure Cosmos DB API for MongoDB collection event. |
+| [RestorableMongodbCollectionsListResult](#restorablemongodbcollectionslistresult) | The List operation response that contains the collection events and their properties. |
+
+### <a id="defaulterrorresponse"></a>DefaultErrorResponse
+
+An error response from the service.
+
+| **Name** | **Type** | **Description** |
+|---|---|---|
+| error | [ErrorResponse](#errorresponse)| Error response. |
+
+### <a id="errorresponse"></a>ErrorResponse
+
+Error Response.
+
+| **Name** | **Type** | **Description** |
+|---|---|---|
+| code |string| Error code. |
+| message |string| Error message indicating why the operation failed. |
+
+### <a id="operationtype"></a>OperationType
+
+Enum to indicate the operation type of the event.
+
+| **Name** | **Type** | **Description** |
+|---|---|---|
+| Create |string|collection creation event|
+| Delete |string|collection deletion event|
+| Replace |string|collection modification event|
+
+### <a id="resource"></a>Resource
+
+The resource of an Azure Cosmos DB API for MongoDB collection event.
+
+| **Name** | **Type** | **Description** |
+|---|---|---|
+| _rid |string| A system-generated property. A unique identifier. |
+| eventTimestamp |string| The time when this collection event happened. |
+| operationType |[OperationType](#operationtype)| The operation type of this collection event. |
+| ownerId |string| The name of the MongoDB collection.|
+| ownerResourceId |string| The resource ID of the MongoDB collection. |
+
+### <a id="restorablemongodbcollectiongetresult"></a>RestorableMongodbCollectionGetResult
+
+An Azure Cosmos DB API for MongoDB collection event.
+
+| **Name** | **Type** | **Description** |
+|---|---|---|
+| ID |string| The unique resource identifier of the Azure Resource Manager resource. |
+| name |string| The name of the Resource Manager resource. |
+| properties |[RestorableMongodbCollectionProperties](#restorablemongodbcollectionproperties)| The properties of a collection event. |
+| type |string| The type of Azure resource. |
+
+### <a id="restorablemongodbcollectionproperties"></a>RestorableMongodbCollectionProperties
+
+The properties of an Azure Cosmos DB API for MongoDB collection event.
+
+| **Name** | **Type** | **Description** |
+|---|---|---|
+| resource | [Resource](#resource)| The resource of an Azure Cosmos DB API for MongoDB collection event. |
+
+### <a id="restorablemongodbcollectionslistresult"></a>RestorableMongodbCollectionsListResult
+
+The List operation response that contains the collection events and their properties.
+
+| **Name** | **Type** | **Description** |
+|---|---|---|
+| value |[RestorableMongodbCollectionGetResult](#restorablemongodbcollectiongetresult)[]| List of Azure Cosmos DB API for MongoDB collection events and their properties. |
+
+## Next steps
+
+* [List restorable databases](restorable-mongodb-databases-list.md) in Azure Cosmos DB API for MongoDB using REST API.
+* [Resource model](continuous-backup-restore-resource-model.md) of continuous backup mode.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/restorable-mongodb-databases-list https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/restorable-mongodb-databases-list.md
@@ -0,0 +1,171 @@
+
+ Title: List restorable databases in Azure Cosmos DB API for MongoDB using REST API
+description: Show the event feed of all mutations done on all the Azure Cosmos DB MongoDB databases under the restorable account. This helps in scenarios where a database was accidentally deleted and you need to get the deletion time.
+Last updated : 02/03/2021
+# List restorable databases in Azure Cosmos DB API for MongoDB using REST API
+
+> [!IMPORTANT]
+> The point-in-time restore feature (continuous backup mode) for Azure Cosmos DB is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Show the event feed of all mutations done on all the Azure Cosmos DB MongoDB databases under the restorable account. This helps in scenarios where a database was accidentally deleted and you need to get the deletion time. This API requires `Microsoft.DocumentDB/locations/restorableDatabaseAccounts/*/read` permission.
+
+```http
+GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.DocumentDB/locations/{location}/restorableDatabaseAccounts/{instanceId}/restorableMongodbDatabases?api-version=2020-06-01-preview
+```
+
+## URI parameters
+
+| Name | In | Required | Type | Description |
+|---|---|---|---|---|
+| **instanceId** | path | True |string| The instanceId GUID of a restorable database account. |
+| **location** | path | True | string| Azure Cosmos DB region, with spaces between words and each word capitalized. |
+| **subscriptionId** | path | True | string| The ID of the target subscription. |
+| **api-version** | query | True | string | The API version to use for this operation. |
+
+## Responses
+
+| Name | Type | Description |
+| | | |
+| 200 OK | [RestorableMongodbDatabasesListResult](#restorablemongodbdatabaseslistresult)| The operation completed successfully. |
+| Other Status Codes | [DefaultErrorResponse](#defaulterrorresponse)| Error response describing why the operation failed.|
+
+## Examples
+
+### CosmosDBRestorableMongodbDatabaseList
+
+**Sample request**
+
+```http
+GET https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDB/locations/WestUS/restorableDatabaseAccounts/d9b26648-2f53-4541-b3d8-3044f4f9810d/restorableMongodbDatabases?api-version=2020-06-01-preview
+```
+
+**Sample response**
+
+Status code: 200
+
+```json
+{
+ "value": [
+ {
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDb/locations/westus/restorableDatabaseAccounts/36f09704-6be3-4f33-aa05-17b73e504c75/restorableMongodbDatabases/59c21367-b98b-4a8e-abb7-b6f46600decc",
+ "type": "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restorableMongodbDatabases",
+ "name": "59c21367-b98b-4a8e-abb7-b6f46600decc",
+ "properties": {
+ "resource": {
+ "_rid": "DLB14gAAAA==",
+ "eventTimestamp": "2020-09-02T19:45:03Z",
+ "ownerId": "Database1",
+ "ownerResourceId": "PD5DALigDgw=",
+ "operationType": "Create"
+ }
+ }
+ },
+ {
+ "id": "/subscriptions/2296c272-5d55-40d9-bc05-4d56dc2d7588/providers/Microsoft.DocumentDb/locations/westus/restorableDatabaseAccounts/d9b26648-2f53-4541-b3d8-3044f4f9810d/restorableMongodbDatabases/8456cb17-cdb0-4c6a-8db8-d0ff3f886257",
+ "type": "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restorableMongodbDatabases",
+ "name": "8456cb17-cdb0-4c6a-8db8-d0ff3f886257",
+ "properties": {
+ "resource": {
+ "_rid": "ESXNLAAAAA==",
+ "eventTimestamp": "2020-09-02T19:53:42Z",
+ "ownerId": "Database1",
+ "ownerResourceId": "PD5DALigDgw=",
+ "operationType": "Delete"
+ }
+ }
+ }
+ ]
+}
+```
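+
+As a hedged illustration of the accidental-deletion scenario described above, the snippet below scans a parsed response like this sample for `Delete` events to recover the deletion time. `response_json` and the database name are assumptions carried over from the earlier sketch.
+
+```python
+deleted_db = "Database1"  # hypothetical: the database that was accidentally deleted
+
+for event in response_json["value"]:
+    resource = event["properties"]["resource"]
+    if resource["ownerId"] == deleted_db and resource["operationType"] == "Delete":
+        # Restore to a point just before this timestamp.
+        print(f"{deleted_db} was deleted at {resource['eventTimestamp']}")
+```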
+
+## Definitions
+
+|Definition | Description|
+| || |
+| [DefaultErrorResponse](#defaulterrorresponse) | An error response from the service. |
+| [ErrorResponse](#errorresponse) | Error Response. |
+| [OperationType](#operationtype) | Enum to indicate the operation type of the event. |
+| [Resource](#resource) | The resource of an Azure Cosmos DB API for MongoDB database event |
+| [RestorableMongodbDatabaseGetResult](#restorablemongodbdatabasegetresult) | An Azure Cosmos DB API for MongoDB database event |
+| [RestorableMongodbDatabaseProperties](#restorablemongodbdatabaseproperties) | The properties of an Azure Cosmos DB API for MongoDB database event |
+| [RestorableMongodbDatabasesListResult](#restorablemongodbdatabaseslistresult) | The List operation response that contains the Azure Cosmos DB API for MongoDB database events and their properties. |
+
+### <a id="defaulterrorresponse"></a>DefaultErrorResponse
+
+An error response from the service.
+
+| **Name** | **Type** | **Description** |
+| | | |
+| error | [ErrorResponse](#errorresponse)| Error Response. |
+
+### <a id="errorresponse"></a>ErrorResponse
+
+Error Response.
+
+| **Name** | **Type** | **Description** |
+| | | |
+| code |string| Error code. |
+| message |string| Error message indicating why the operation failed. |
+
+### <a id="operationtype"></a>OperationType
+
+Enum to indicate the operation type of the event.
+
+| **Name** | **Type** | **Description** |
+| | | |
+| Create |string|database creation event|
+| Delete |string|database deletion event|
+| Replace |string|database modification event|
+
+### <a id="resource"></a>Resource
+
+The resource of an Azure Cosmos DB API for MongoDB database event
+
+| **Name** | **Type** | **Description** |
+| | | |
+| _rid |string| A system-generated property. A unique identifier. |
+| eventTimestamp |string| The time when this database event happened. |
+| operationType |[OperationType](#operationtype)| The operation type of this database event. |
+| ownerId |string| The name of the Azure Cosmos DB API for MongoDB database.|
+| ownerResourceId |string| The resource ID of the Azure Cosmos DB API for MongoDB database. |
+
+### <a id="restorablemongodbdatabasegetresult"></a>RestorableMongodbDatabaseGetResult
+
+An Azure Cosmos DB API for MongoDB database event
+
+| **Name** | **Type** | **Description** |
+| | | |
+| ID |string| The unique resource Identifier of the Azure Resource Manager resource. |
+| name |string| The name of the Resource Manager resource. |
+| properties |[RestorableMongodbDatabaseProperties](#restorablemongodbdatabaseproperties)| The properties of an Azure Cosmos DB API for MongoDB database event. |
+| type |string| The type of Azure resource. |
+
+### <a id="restorablemongodbdatabaseproperties"></a>RestorableMongodbDatabaseProperties
+
+The properties of an Azure Cosmos DB API for MongoDB database event
+
+| **Name** | **Type** | **Description** |
+| | | |
+| resource |[Resource](#resource)| The resource of an Azure Cosmos DB API for MongoDB database event |
+
+### <a id="restorablemongodbdatabaseslistresult"></a>RestorableMongodbDatabasesListResult
+
+The List operation response that contains the database events and their properties.
+
+| **Name** | **Type** | **Description** |
+| | | |
+| value |[RestorableMongodbDatabaseGetResult](#restorablemongodbdatabasegetresult)[]| List of database events and their properties. |
+
+## Next steps
+
+* [List restorable resources](restorable-mongodb-resources-list.md) in Azure Cosmos DB API for MongoDB using REST API.
+* [List restorable collections](restorable-mongodb-collections-list.md) in Azure Cosmos DB API for MongoDB using REST API.
+* [Resource model](continuous-backup-restore-resource-model.md) of continuous backup mode.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/restorable-mongodb-resources-list https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/restorable-mongodb-resources-list.md
@@ -0,0 +1,133 @@
+
+ Title: List restorable resources in Azure Cosmos DB API for MongoDB using REST API
+description: Return a list of database and collection combinations that exist on the account at the given timestamp and location. This helps you validate which resources existed at a given timestamp and location.
+++ Last updated : 02/03/2021+++
+# List restorable resources in Azure Cosmos DB API for MongoDB using REST API
+
+> [!IMPORTANT]
+> The point-in-time restore feature (continuous backup mode) for Azure Cosmos DB is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Return a list of database and collection combinations that exist on the account at the given timestamp and location. This helps you validate which resources existed at a given timestamp and location. This API requires `Microsoft.DocumentDB/locations/restorableDatabaseAccounts/*/read` permission.
+
+```http
+GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.DocumentDB/locations/{location}/restorableDatabaseAccounts/{instanceId}/restorableMongodbResources?api-version=2020-06-01-preview
+```
+
+With optional parameters:
+
+```http
+GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.DocumentDB/locations/{location}/restorableDatabaseAccounts/{instanceId}/restorableMongodbResources?api-version=2020-06-01-preview&restoreLocation={restoreLocation}&restoreTimestampInUtc={restoreTimestampInUtc}
+```
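+
+As a minimal sketch, the optional parameters can be supplied from Python like this, assuming `base` and `token` were prepared as in the earlier databases example; the region and timestamp values are placeholders.
+
+```python
+import requests
+
+params = {
+    "api-version": "2020-06-01-preview",
+    "restoreLocation": "WestUS",                 # placeholder region
+    "restoreTimestampInUtc": "10/13/2020 4:56",  # placeholder UTC timestamp
+}
+response = requests.get(
+    f"{base}/restorableMongodbResources",
+    params=params,  # requests URL-encodes the space and slashes for you
+    headers={"Authorization": f"Bearer {token.token}"},
+)
+response.raise_for_status()
+```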
+
+## URI parameters
+
+| Name | In | Required | Type | Description |
+| | | | | |
+| **instanceId** | path | True |string| The instanceId GUID of a restorable database account. |
+| **location** | path | True | string| Azure Cosmos DB region, with spaces between words and each word capitalized. |
+| **subscriptionId** | path | True | string| The ID of the target subscription. |
+| **api-version** | query | True | string | The API version to use for this operation. |
+| **restoreLocation** | query | |string| The location where the restorable resources are located. |
+| **restoreTimestampInUtc** | query | |string| The timestamp when the restorable resources existed. |
+
+## Responses
+
+| Name | Type | Description |
+| | | |
+| 200 OK | [RestorableMongodbResourcesListResult](#restorablemongodbresourceslistresult)| The operation completed successfully. |
+| Other Status Codes | [DefaultErrorResponse](#defaulterrorresponse)| Error response describing why the operation failed. |
++
+## Examples
+
+### CosmosDBRestorableMongodbResourceList
+
+**Sample request**
+
+```http
+GET https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDB/locations/WestUS/restorableDatabaseAccounts/d9b26648-2f53-4541-b3d8-3044f4f9810d/restorableMongodbResources?api-version=2020-06-01-preview&restoreLocation=WestUS&restoreTimestampInUtc=10/13/2020 4:56
+```
+
+**Sample response**
+
+Status code: 200
+
+```json
+{
+ "value": [
+ {
+ "databaseName": "Database1",
+ "collectionNames": [
+ "Collection1"
+ ]
+ },
+ {
+ "databaseName": "Database2",
+ "collectionNames": [
+ "Collection1",
+ "Collection2"
+ ]
+ },
+ {
+ "databaseName": "Database3",
+ "collectionNames": []
+ }
+ ]
+}
+```
+
+## Definitions
+
+|Definition | Description|
+| || |
+| [DatabaseRestoreResource](#databaserestoreresource) | Specific Databases to restore. |
+| [DefaultErrorResponse](#defaulterrorresponse) | An error response from the service. |
+| [ErrorResponse](#errorresponse) | Error Response. |
+| [RestorableMongodbResourcesListResult](#restorablemongodbresourceslistresult) | The List operation response that contains the restorable Azure Cosmos DB API for MongoDB resources. |
+
+### <a id="databaserestoreresource"></a>DatabaseRestoreResource
+
+Specific Databases to restore.
+
+| **Name** | **Type** | **Description** |
+| | | |
+| collectionNames |string[]| The names of the collections available for restore. |
+| databaseName |string| The name of the database available for restore. |
+
+### <a id="defaulterrorresponse"></a>DefaultErrorResponse
+
+An error response from the service.
+
+| **Name** | **Type** | **Description** |
+| | | |
+| error | [ErrorResponse](#errorresponse)| Error Response. |
+
+### <a id="errorresponse"></a>ErrorResponse
+
+Error Response.
+
+| **Name** | **Type** | **Description** |
+| | | |
+| code |string| Error code. |
+| message |string| Error message indicating why the operation failed. |
+
+### <a id="restorablemongodbresourceslistresult"></a>RestorableMongodbResourcesListResult
+
+The List operation response that contains the restorable Azure Cosmos DB API for MongoDB resources.
+
+| **Name** | **Type** | **Description** |
+| | | |
+| value |[DatabaseRestoreResource](#databaserestoreresource)[]| List of restorable Azure Cosmos DB API for MongoDB resources, including the database and collection names. |
+
+## Next steps
+
+* [List restorable databases](restorable-mongodb-databases-list.md) in Azure Cosmos DB API for MongoDB using REST API.
+* [Resource model](continuous-backup-restore-resource-model.md) of continuous backup mode.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/restorable-sql-containers-list https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/restorable-sql-containers-list.md
@@ -0,0 +1,249 @@
+
+ Title: List restorable SQL API containers in Azure Cosmos DB using REST API
+description: Show the event feed of all mutations done on all the Azure Cosmos DB SQL containers under a specific database. This helps in scenarios where a container was accidentally deleted.
+++ Last updated : 02/03/2021+++
+# List restorable SQL API containers in Azure Cosmos DB using REST API
+
+> [!IMPORTANT]
+> The point-in-time restore feature (continuous backup mode) for Azure Cosmos DB is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Show the event feed of all mutations done on all the Azure Cosmos DB SQL containers under a specific database. This helps in scenarios where a container was accidentally deleted. This API requires `Microsoft.DocumentDB/locations/restorableDatabaseAccounts/*/read` permission.
+
+```http
+GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.DocumentDB/locations/{location}/restorableDatabaseAccounts/{instanceId}/restorableSqlContainers?api-version=2020-06-01-preview
+```
+
+With optional parameters:
+
+```http
+GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.DocumentDB/locations/{location}/restorableDatabaseAccounts/{instanceId}/restorableSqlContainers?api-version=2020-06-01-preview&restorableSqlDatabaseRid={restorableSqlDatabaseRid}
+```
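+
+Judging from the samples in these articles, the `restorableSqlDatabaseRid` value this API expects is the `ownerResourceId` surfaced by the restorable SQL databases feed. The following hedged two-step Python sketch reflects that reading; `base` and `token` are the hypothetical values from the earlier examples, and `"Database1"` is a placeholder database name.
+
+```python
+import requests
+
+headers = {"Authorization": f"Bearer {token.token}"}
+api = {"api-version": "2020-06-01-preview"}
+
+# Step 1: find the database's ownerResourceId from the databases feed.
+dbs = requests.get(f"{base}/restorableSqlDatabases", params=api, headers=headers).json()
+db_rid = next(
+    e["properties"]["resource"]["ownerResourceId"]
+    for e in dbs["value"]
+    if e["properties"]["resource"]["ownerId"] == "Database1"  # placeholder name
+)
+
+# Step 2: list only the container events under that database.
+containers = requests.get(
+    f"{base}/restorableSqlContainers",
+    params={**api, "restorableSqlDatabaseRid": db_rid},
+    headers=headers,
+).json()
+```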
+
+## URI parameters
+
+| Name | In | Required | Type | Description |
+| | | | | |
+| **instanceId** | path | True |string| The instanceId GUID of a restorable database account. |
+| **location** | path | True | string| Azure Cosmos DB region, with spaces between words and each word capitalized. |
+| **subscriptionId** | path | True | string| The ID of the target subscription. |
+| **api-version** | query | True | string | The API version to use for this operation. |
+| **restorableSqlDatabaseRid** | query | |string| The resource ID of the SQL database. |
+
+## Responses
+
+| Name | Type | Description |
+| | | |
+| 200 OK | [RestorableSqlContainersListResult](#restorablesqlcontainerslistresult)| The operation completed successfully. |
+| Other Status Codes | [DefaultErrorResponse](#defaulterrorresponse)| Error response describing why the operation failed. |
+
+## Examples
+
+### CosmosDBRestorableSqlContainerList
+
+**Sample request**
+
+```http
+GET https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDB/locations/WestUS/restorableDatabaseAccounts/98a570f2-63db-4117-91f0-366327b7b353/restorableSqlContainers?api-version=2020-06-01-preview&restorableSqlDatabaseRid=3fu-hg==
+```
+
+**Sample response**
+
+Status code: 200
+
+```json
+{
+ "value": [
+ {
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDb/locations/westus/restorableDatabaseAccounts/98a570f2-63db-4117-91f0-366327b7b353/restorableSqlContainers/79609a98-3394-41f8-911f-cfab0c075c86",
+ "type": "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restorableSqlContainers",
+ "name": "79609a98-3394-41f8-911f-cfab0c075c86",
+ "properties": {
+ "resource": {
+ "_rid": "zAyAPQAAAA==",
+ "eventTimestamp": "2020-10-13T04:56:42Z",
+ "ownerId": "Container1",
+ "ownerResourceId": "V18LoLrv-qA=",
+ "operationType": "Create",
+ "container": {
+ "id": "Container1",
+ "indexingPolicy": {
+ "indexingMode": "Consistent",
+ "automatic": true,
+ "includedPaths": [
+ {
+ "path": "/*"
+ },
+ {
+ "path": "/\"_ts\"/?"
+ }
+ ],
+ "excludedPaths": [
+ {
+ "path": "/\"_etag\"/?"
+ }
+ ]
+ },
+ "conflictResolutionPolicy": {
+ "mode": "LastWriterWins",
+ "conflictResolutionPath": "/_ts",
+ "conflictResolutionProcedure": ""
+ },
+ "_rid": "V18LoLrv-qA=",
+ "_self": "dbs/V18LoA==/colls/V18LoLrv-qA=/",
+ "_etag": "\"00003e00-0000-0700-0000-5f85338a0000\""
+ }
+ }
+ }
+ },
+ {
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDb/locations/westus/restorableDatabaseAccounts/98a570f2-63db-4117-91f0-366327b7b353/restorableSqlContainers/e85298a1-c631-4726-825e-a7ca092e9098",
+ "type": "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restorableSqlContainers",
+ "name": "e85298a1-c631-4726-825e-a7ca092e9098",
+ "properties": {
+ "resource": {
+ "_rid": "PrArcgAAAA==",
+ "eventTimestamp": "2020-10-13T05:03:27Z",
+ "ownerId": "Container1",
+ "ownerResourceId": "V18LoLrv-qA=",
+ "operationType": "Replace",
+ "container": {
+ "id": "Container1",
+ "indexingPolicy": {
+ "indexingMode": "Consistent",
+ "automatic": true,
+ "includedPaths": [
+ {
+ "path": "/*"
+ },
+ {
+ "path": "/\"_ts\"/?"
+ }
+ ],
+ "excludedPaths": [
+ {
+ "path": "/\"_etag\"/?"
+ }
+ ]
+ },
+ "defaultTtl": 12345,
+ "conflictResolutionPolicy": {
+ "mode": "LastWriterWins",
+ "conflictResolutionPath": "/_ts",
+ "conflictResolutionProcedure": ""
+ },
+ "_rid": "V18LoLrv-qA=",
+ "_self": "dbs/V18LoA==/colls/V18LoLrv-qA=/",
+ "_etag": "\"00004400-0000-0700-0000-5f85351f0000\""
+ }
+ }
+ }
+ }
+ ]
+}
+```
+
+## Definitions
+
+|Definition | Description|
+| || |
+| [Container](#container) | Azure Cosmos DB SQL container resource object |
+| [DefaultErrorResponse](#defaulterrorresponse) | An error response from the service. |
+| [ErrorResponse](#errorresponse) | Error Response. |
+| [OperationType](#operationtype) | Enum to indicate the operation type of the event. |
+| [Resource](#resource) | The resource of an Azure Cosmos DB SQL container event |
+| [RestorableSqlContainerGetResult](#restorablesqlcontainergetresult) | An Azure Cosmos DB SQL container event |
+| [RestorableSqlContainerProperties](#restorablesqlcontainerproperties) | The properties of an Azure Cosmos DB SQL container event|
+| [RestorableSqlContainersListResult](#restorablesqlcontainerslistresult) | The List operation response that contains the SQL container events and their properties. |
+
+### <a id="container"></a>Container
+
+Azure Cosmos DB SQL container resource object
+
+| **Name** | **Type** | **Description** |
+| || | |
+| _etag |string| A system-generated property representing the resource etag required for optimistic concurrency control. |
+| _rid |string| A system-generated property. A unique identifier. |
+| _self |string| A system-generated property that specifies the addressable path of the container resource. |
+| _ts |number| A system-generated property that denotes the last updated timestamp of the resource. |
+| ID |string| Name of the Azure Cosmos DB SQL container |
+
+### <a id="defaulterrorresponse"></a>DefaultErrorResponse
+
+An error response from the service.
+
+| **Name** | **Type** | **Description** |
+| | | |
+| error | [ErrorResponse](#errorresponse)| Error Response. |
+
+### <a id="errorresponse"></a>ErrorResponse
+
+Error Response.
+
+| **Name** | **Type** | **Description** |
+| | | |
+| code |string| Error code. |
+| message |string| Error message indicating why the operation failed. |
+
+### <a id="operationtype"></a>OperationType
+
+Enum to indicate the operation type of the event.
+
+| **Name** | **Type** | **Description** |
+| | | |
+| Create |string|container creation event|
+| Delete |string|container deletion event|
+| Replace |string|container modification event|
+| SystemOperation |string|container modification event triggered by the system. This event is not initiated by the user|
+
+### <a id="resource"></a>Resource
+
+The resource of an Azure Cosmos DB SQL container event
+
+| **Name** | **Type** | **Description** |
+| | | |
+| _rid |string| A system-generated property. A unique identifier. |
+| container |[Container](#container)| Azure Cosmos DB SQL container resource object |
+| eventTimestamp |string| The time when this container event happened. |
+| operationType |[OperationType](#operationtype)| The operation type of this container event. |
+| ownerId |string| The name of the SQL container. |
+| ownerResourceId |string| The resource ID of the SQL container. |
+
+### <a id="restorablesqlcontainergetresult"></a>RestorableSqlContainerGetResult
+
+An Azure Cosmos DB SQL container event
+
+| **Name** | **Type** | **Description** |
+| | | |
+| ID |string| The unique resource Identifier of the Azure Resource Manager resource. |
+| name |string| The name of the Azure Resource Manager resource. |
+| properties |[RestorableSqlContainerProperties](#restorablesqlcontainerproperties)| The properties of a SQL container event. |
+| type |string| The type of Azure resource. |
+
+### <a id="restorablesqlcontainerproperties"></a>RestorableSqlContainerProperties
+
+The properties of an Azure Cosmos DB SQL container event
+
+| **Name** | **Type** | **Description** |
+| | | |
+| resource |[Resource](#resource)| The resource of an Azure Cosmos DB SQL container event |
+
+### <a id="restorablesqlcontainerslistresult"></a>RestorableSqlContainersListResult
+
+The List operation response that contains the SQL container events and their properties.
+
+| **Name** | **Type** | **Description** |
+| | | |
+| value |[RestorableSqlContainerGetResult](#restorablesqlcontainergetresult)[]| List of SQL container events and their properties. |
+
+## Next steps
+
+* [List restorable databases](restorable-sql-databases-list.md) in Azure Cosmos DB SQL API using REST API.
+* [Resource model](continuous-backup-restore-resource-model.md) of continuous backup mode.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/restorable-sql-databases-list https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/restorable-sql-databases-list.md
@@ -0,0 +1,226 @@
+
+ Title: List restorable SQL API databases in Azure Cosmos DB using REST API
+description: Show the event feed of all mutations done on all the Azure Cosmos DB SQL databases under the restorable account. This helps in scenarios where a database was accidentally deleted and you need to get the deletion time.
+++ Last updated : 02/03/2021+++
+# List restorable SQL API databases in Azure Cosmos DB using REST API
+
+> [!IMPORTANT]
+> The point-in-time restore feature (continuous backup mode) for Azure Cosmos DB is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Show the event feed of all mutations done on all the Azure Cosmos DB SQL databases under the restorable account. This helps in scenarios where a database was accidentally deleted and you need to get the deletion time. This API requires `Microsoft.DocumentDB/locations/restorableDatabaseAccounts/*/read` permission.
+
+```http
+GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.DocumentDB/locations/{location}/restorableDatabaseAccounts/{instanceId}/restorableSqlDatabases?api-version=2020-06-01-preview
+```
+
+## URI parameters
+
+| Name | In | Required | Type | Description |
+| | | | | |
+| **instanceId** | path | True |string| The instanceId GUID of a restorable database account. |
+| **location** | path | True | string| Azure Cosmos DB region, with spaces between words and each word capitalized. |
+| **subscriptionId** | path | True | string| The ID of the target subscription. |
+| **api-version** | query | True | string | The API version to use for this operation. |
+
+## Responses
+
+| Name | Type | Description |
+| | | |
+| 200 OK | [RestorableSqlDatabasesListResult](#restorablesqldatabaseslistresult)| The operation completed successfully. |
+| Other Status Codes | [DefaultErrorResponse](#defaulterrorresponse)| Error response describing why the operation failed. |
+
+## Examples
+
+### CosmosDBRestorableSqlDatabaseList
+
+**Sample request**
+
+```http
+GET https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDB/locations/WestUS/restorableDatabaseAccounts/d9b26648-2f53-4541-b3d8-3044f4f9810d/restorableSqlDatabases?api-version=2020-06-01-preview
+```
+
+**Sample response**
+
+Status code: 200
+
+```json
+{
+ "value": [
+ {
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDb/locations/westus/restorableDatabaseAccounts/36f09704-6be3-4f33-aa05-17b73e504c75/restorableSqlDatabases/59c21367-b98b-4a8e-abb7-b6f46600decc",
+ "type": "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restorableSqlDatabases",
+ "name": "59c21367-b98b-4a8e-abb7-b6f46600decc",
+ "properties": {
+ "resource": {
+ "_rid": "DLB14gAAAA==",
+ "eventTimestamp": "2020-09-02T19:45:03Z",
+ "ownerId": "Database1",
+ "ownerResourceId": "3fu-hg==",
+ "operationType": "Create",
+ "database": {
+ "id": "Database1",
+ "_rid": "3fu-hg==",
+ "_self": "dbs/3fu-hg==/",
+ "_etag": "\"0000c20a-0000-0700-0000-5f4ff63f0000\"",
+ "_colls": "colls/",
+ "_users": "users/"
+ }
+ }
+ }
+ },
+ {
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDb/locations/westus/restorableDatabaseAccounts/d9b26648-2f53-4541-b3d8-3044f4f9810d/restorableSqlDatabases/8456cb17-cdb0-4c6a-8db8-d0ff3f886257",
+ "type": "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restorableSqlDatabases",
+ "name": "8456cb17-cdb0-4c6a-8db8-d0ff3f886257",
+ "properties": {
+ "resource": {
+ "_rid": "ESXNLAAAAA==",
+ "eventTimestamp": "2020-09-02T19:53:42Z",
+ "ownerId": "Database1",
+ "ownerResourceId": "3fu-hg==",
+ "database": {
+ "id": "Database1",
+ "_rid": "3fu-hg==",
+ "_self": "dbs/3fu-hg==/",
+ "_etag": "\"0000c20a-0000-0700-0000-5f4ff63f0000\"",
+ "_colls": "colls/",
+ "_users": "users/",
+ "_ts": 1599075903
+ },
+ "operationType": "Delete"
+ }
+ }
+ },
+ {
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDb/locations/westus/restorableDatabaseAccounts/d9b26648-2f53-4541-b3d8-3044f4f9810d/restorableSqlDatabases/2c07991b-9c7c-4e85-be68-b18c1f2ff326",
+ "type": "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restorableSqlDatabases",
+ "name": "2c07991b-9c7c-4e85-be68-b18c1f2ff326",
+ "properties": {
+ "resource": {
+ "_rid": "aXFqUQAAAA==",
+ "eventTimestamp": "2020-09-02T19:53:15Z",
+ "ownerId": "Database2",
+ "ownerResourceId": "0SziSg==",
+ "database": {
+ "id": "Database2",
+ "_rid": "0SziSg==",
+ "_self": "dbs/0SziSg==/",
+ "_etag": "\"0000ca0a-0000-0700-0000-5f4ff82b0000\"",
+ "_colls": "colls/",
+ "_users": "users/"
+ },
+ "operationType": "Create"
+ }
+ }
+ }
+ ]
+}
+```
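+
+To make create/delete pairs easy to spot in a feed like the one above, the following informal sketch orders the events into a per-database timeline. `response_json` is assumed to hold the parsed response body.
+
+```python
+from collections import defaultdict
+
+timeline = defaultdict(list)
+for event in response_json["value"]:
+    r = event["properties"]["resource"]
+    timeline[r["ownerId"]].append((r["eventTimestamp"], r["operationType"]))
+
+# Print events in time order for each database.
+for db, events in timeline.items():
+    for ts, op in sorted(events):
+        print(f"{ts}  {op:<8}  {db}")
+```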
+
+## Definitions
+
+|Definition | Description|
+| || |
+| [Database](#database) | Azure Cosmos DB SQL database resource object |
+| [DefaultErrorResponse](#defaulterrorresponse) | An error response from the service. |
+| [ErrorResponse](#errorresponse) | Error Response. |
+| [OperationType](#operationtype) | Enum to indicate the operation type of the event. |
+| [Resource](#resource) | The resource of an Azure Cosmos DB SQL database event |
+| [RestorableSqlDatabaseGetResult](#restorablesqldatabasegetresult) | An Azure Cosmos DB SQL database event |
+| [RestorableSqlDatabaseProperties](#restorablesqldatabaseproperties) | The properties of an Azure Cosmos DB SQL database event |
+| [RestorableSqlDatabasesListResult](#restorablesqldatabaseslistresult) | The List operation response that contains the SQL database events and their properties. |
+
+### <a id="database"></a>Database
+
+Azure Cosmos DB SQL database resource object
+
+| **Name** | **Type** | **Description** |
+| || | |
+| _colls |string| A system-generated property that specifies the addressable path of the collections resource. |
+| _etag |string| A system-generated property representing the resource etag required for optimistic concurrency control. |
+| _rid |string| A system-generated property. A unique identifier. |
+| _self |string| A system-generated property that specifies the addressable path of the database resource. |
+| _ts |number| A system-generated property that denotes the last updated timestamp of the resource. |
+| _users |string| A system-generated property that specifies the addressable path of the users resource. |
+| ID |string| Name of the Azure Cosmos DB SQL database |
+
+### <a id="defaulterrorresponse"></a>DefaultErrorResponse
+
+An error response from the service.
+
+| **Name** | **Type** | **Description** |
+| | | |
+| error | [ErrorResponse](#errorresponse)| Error Response. |
+
+### <a id="errorresponse"></a>ErrorResponse
+
+Error Response.
+
+| **Name** | **Type** | **Description** |
+| | | |
+| code |string| Error code. |
+| message |string| Error message indicating why the operation failed. |
+
+### <a id="operationtype"></a>OperationType
+
+Enum to indicate the operation type of the event.
+
+| **Name** | **Type** | **Description** |
+| | | |
+| Create |string|database creation event|
+| Delete |string|database deletion event|
+| Replace |string|database modification event|
+| SystemOperation |string|database modification event triggered by the system. This event is not initiated by the user|
+
+### <a id="resource"></a>Resource
+
+The resource of an Azure Cosmos DB SQL database event
+
+| **Name** | **Type** | **Description** |
+| | | |
+| _rid |string| A system-generated property. A unique identifier. |
+| database |[Database](#database)| Azure Cosmos DB SQL database resource object |
+| eventTimestamp |string| The time when this database event happened. |
+| operationType |[OperationType](#operationtype)| The operation type of this database event. |
+| ownerId |string| The name of the SQL database. |
+| ownerResourceId |string| The resource ID of the SQL database. |
+
+### <a id="restorablesqldatabasegetresult"></a>RestorableSqlDatabaseGetResult
+
+An Azure Cosmos DB SQL database event
+
+| **Name** | **Type** | **Description** |
+| | | |
+| ID |string| The unique resource Identifier of the Azure Resource Manager resource. |
+| name |string| The name of the Azure Resource Manager resource. |
+| properties | [RestorableSqlDatabaseProperties](#restorablesqldatabaseproperties)| The properties of a SQL database event. |
+| type |string| The type of Azure resource. |
+
+### <a id="restorablesqldatabaseproperties"></a>RestorableSqlDatabaseProperties
+
+The properties of an Azure Cosmos DB SQL database event
+
+| **Name** | **Type** | **Description** |
+| | | |
+| resource |[Resource](#resource)| The resource of an Azure Cosmos DB SQL database event |
+
+### <a id="restorablesqldatabaseslistresult"></a>RestorableSqlDatabasesListResult
+
+The List operation response that contains the SQL database events and their properties.
+
+| **Name** | **Type** | **Description** |
+| | | |
+| value |[RestorableSqlDatabaseGetResult](#restorablesqldatabasegetresult)[]| List of SQL database events and their properties. |
+
+## Next steps
+
+* [List restorable containers](restorable-sql-containers-list.md) in Azure Cosmos DB SQL API using REST API.
+* [Resource model](continuous-backup-restore-resource-model.md) of continuous backup mode.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/restorable-sql-resources-list https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/restorable-sql-resources-list.md
@@ -0,0 +1,132 @@
+
+ Title: List restorable SQL API resources in Azure Cosmos DB using REST API
+description: Return a list of database and container combinations that exist on the account at the given timestamp and location. This helps you validate which resources existed at a given timestamp and location.
+++ Last updated : 02/03/2021+++
+# List restorable SQL API resources in Azure Cosmos DB using REST API
+
+> [!IMPORTANT]
+> The point-in-time restore feature (continuous backup mode) for Azure Cosmos DB is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Return a list of database and container combinations that exist on the account at the given timestamp and location. This helps you validate which resources existed at a given timestamp and location. This API requires `Microsoft.DocumentDB/locations/restorableDatabaseAccounts/*/read` permission.
+
+```http
+GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.DocumentDB/locations/{location}/restorableDatabaseAccounts/{instanceId}/restorableSqlResources?api-version=2020-06-01-preview
+```
+
+With optional parameters:
+
+```http
+GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.DocumentDB/locations/{location}/restorableDatabaseAccounts/{instanceId}/restorableSqlResources?api-version=2020-06-01-preview&restoreLocation={restoreLocation}&restoreTimestampInUtc={restoreTimestampInUtc}
+```
+
+## URI parameters
+
+| Name | In | Required | Type | Description |
+| | | | | |
+| **instanceId** | path | True |string| The instanceId GUID of a restorable database account. |
+| **location** | path | True | string| Azure Cosmos DB region, with spaces between words and each word capitalized. |
+| **subscriptionId** | path | True | string| The ID of the target subscription. |
+| **api-version** | query | True | string | The API version to use for this operation. |
+| **restoreLocation** | query | |string| The location where the restorable resources are located. |
+| **restoreTimestampInUtc** | query | |string| The timestamp when the restorable resources existed. |
+
+## Responses
+
+| Name | Type | Description |
+| | | |
+| 200 OK | [RestorableSqlResourcesListResult](#restorablesqlresourceslistresult)| The operation completed successfully. |
+| Other Status Codes | [DefaultErrorResponse](#defaulterrorresponse)| Error response describing why the operation failed. |
+
+## Examples
+
+### CosmosDBRestorableSqlResourceList
+
+**Sample request**
+
+```http
+GET https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDB/locations/WestUS/restorableDatabaseAccounts/d9b26648-2f53-4541-b3d8-3044f4f9810d/restorableSqlResources?api-version=2020-06-01-preview&restoreLocation=WestUS&restoreTimestampInUtc=10/13/2020 4:56
+```
+
+**Sample response**
+
+Status code: 200
+
+```json
+{
+ "value": [
+ {
+ "databaseName": "Database1",
+ "collectionNames": [
+ "Container1"
+ ]
+ },
+ {
+ "databaseName": "Database2",
+ "collectionNames": [
+ "Container1",
+ "Container2"
+ ]
+ },
+ {
+ "databaseName": "Database3",
+ "collectionNames": []
+ }
+ ]
+}
+```
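+
+One way to use this response is as a pre-restore sanity check: confirm that the database (and optionally the container) you plan to restore actually existed at the chosen timestamp. Below is a small hypothetical helper, assuming `response_json` holds the parsed body; note the sample uses the `collectionNames` key even though the items are containers.
+
+```python
+def existed(response_json, database, container=None):
+    """Return True if the database (and optional container) is in the list."""
+    for entry in response_json["value"]:
+        if entry["databaseName"] == database:
+            return container is None or container in entry["collectionNames"]
+    return False
+
+# Using the sample response above:
+assert existed(response_json, "Database2", "Container2")
+assert not existed(response_json, "Database3", "Container1")
+```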
+
+## Definitions
+
+|Definition | Description|
+| || |
+| [DatabaseRestoreResource](#databaserestoreresource) | Specific Databases to restore. |
+| [DefaultErrorResponse](#defaulterrorresponse) | An error response from the service. |
+| [ErrorResponse](#errorresponse) | Error Response. |
+| [RestorableSqlResourcesListResult](#restorablesqlresourceslistresult) | The List operation response that contains the restorable SQL resources. |
+
+### <a id="databaserestoreresource"></a>DatabaseRestoreResource
+
+Specific Databases to restore.
+
+| **Name** | **Type** | **Description** |
+| | | |
+| collectionNames |string[]| The names of the collections available for restore. |
+| databaseName |string| The name of the database available for restore. |
+
+### <a id="defaulterrorresponse"></a>DefaultErrorResponse
+
+An error response from the service.
+
+| **Name** | **Type** | **Description** |
+| | | |
+| error | [ErrorResponse](#errorresponse)| Error Response. |
+
+### <a id="errorresponse"></a>ErrorResponse
+
+Error Response.
+
+| **Name** | **Type** | **Description** |
+| | | |
+| code |string| Error code. |
+| message |string| Error message indicating why the operation failed. |
+
+### <a id="restorablesqlresourceslistresult"></a>RestorableSqlResourcesListResult
+
+The List operation response that contains the restorable SQL resources.
+
+| **Name** | **Type** | **Description** |
+| | | |
+| value |[DatabaseRestoreResource](#databaserestoreresource)[]| List of restorable SQL resources, including the database and collection names. |
+
+## Next steps
+
+* [List restorable databases](restorable-sql-databases-list.md) in Azure Cosmos DB SQL API using REST API.
+* [Resource model](continuous-backup-restore-resource-model.md) of continuous backup mode.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/role-based-access-control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/role-based-access-control.md
@@ -21,7 +21,8 @@ The following are the built-in roles supported by Azure Cosmos DB:
|||
|[DocumentDB Account Contributor](../role-based-access-control/built-in-roles.md#documentdb-account-contributor)|Can manage Azure Cosmos DB accounts.|
|[Cosmos DB Account Reader](../role-based-access-control/built-in-roles.md#cosmos-db-account-reader-role)|Can read Azure Cosmos DB account data.|
-|[Cosmos Backup Operator](../role-based-access-control/built-in-roles.md#cosmosbackupoperator)|Can submit restore request for an Azure Cosmos database or a container. Cannot access any data or use Data Explorer.|
+|[Cosmos Backup Operator](../role-based-access-control/built-in-roles.md#cosmosbackupoperator)| Can submit a restore request in the Azure portal for a periodic backup enabled database or a container. Can modify the backup interval and retention in the Azure portal. Cannot access any data or use Data Explorer. |
+| [CosmosRestoreOperator](../role-based-access-control/built-in-roles.md) | Can perform restore actions for an Azure Cosmos DB account with continuous backup mode. |
|[Cosmos DB Operator](../role-based-access-control/built-in-roles.md#cosmos-db-operator)|Can provision Azure Cosmos accounts, databases, and containers. Cannot access any data or use Data Explorer.|

> [!IMPORTANT]
@@ -31,7 +32,7 @@ The following are the built-in roles supported by Azure Cosmos DB:
@@ -31,7 +31,7 @@ The **Access control (IAM)** pane in the Azure portal is used to configure Azure role-based access control on Azure Cosmos resources. The roles are applied to users, groups, service principals, and managed identities in Active Directory. You can use built-in roles or custom roles for individuals and groups. The following screenshot shows Active Directory integration (Azure RBAC) using access control (IAM) in the Azure portal:

## Custom roles
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-getcurrentdatetime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-query-getcurrentdatetime.md
@@ -1,12 +1,12 @@
Title: GetCurrentDateTime in Azure Cosmos DB query language description: Learn about SQL system function GetCurrentDateTime in Azure Cosmos DB.-+ Previously updated : 08/18/2020- Last updated : 02/03/2021+ # GetCurrentDateTime (Azure Cosmos DB)
@@ -42,7 +42,8 @@ GetCurrentDateTime ()
GetCurrentDateTime() is a nondeterministic function. The result returned is UTC. Precision is 7 digits, with an accuracy of 100 nanoseconds.
-This system function will not utilize the index.
+> [!NOTE]
+> This system function will not utilize the index. If you need to compare values to the current time, obtain the current time before query execution and use that constant string value in the `WHERE` clause.
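+
+As a minimal sketch of that workaround with the Azure Cosmos DB Python SDK (`azure-cosmos`): the current time is computed once on the client and passed as a query parameter, so the filter can use the index. The `container` client object and the `expiryDate` property are assumptions for illustration.
+
+```python
+from datetime import datetime, timezone
+
+# Format the current UTC time as an ISO 8601 string with 7 fractional digits,
+# matching the precision GetCurrentDateTime() returns.
+now_iso = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%f0Z")
+
+results = container.query_items(
+    query="SELECT * FROM c WHERE c.expiryDate < @now",
+    parameters=[{"name": "@now", "value": now_iso}],
+    enable_cross_partition_query=True,
+)
+```
+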
## Examples
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-getcurrentticks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-query-getcurrentticks.md
@@ -5,7 +5,7 @@
Previously updated : 08/14/2020 Last updated : 02/03/2021
@@ -28,7 +28,8 @@ Returns a signed numeric value, the current number of 100-nanosecond ticks that
GetCurrentTicks() is a nondeterministic function. The result returned is UTC (Coordinated Universal Time).
-This system function will not utilize the index.
+> [!NOTE]
+> This system function will not utilize the index. If you need to compare values to the current time, obtain the current time before query execution and use that constant string value in the `WHERE` clause.
## Examples
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-getcurrenttimestamp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-query-getcurrenttimestamp.md
@@ -1,12 +1,12 @@
Title: GetCurrentTimestamp in Azure Cosmos DB query language description: Learn about SQL system function GetCurrentTimestamp in Azure Cosmos DB.-+ Previously updated : 08/19/2020- Last updated : 02/03/2021+ # GetCurrentTimestamp (Azure Cosmos DB)
@@ -28,7 +28,8 @@ Returns a signed numeric value, the current number of milliseconds that have ela
GetCurrentTimestamp() is a nondeterministic function. The result returned is UTC (Coordinated Universal Time).
-This system function will not utilize the index.
+> [!NOTE]
+> This system function will not utilize the index. If you need to compare values to the current time, obtain the current time before query execution and use that constant string value in the `WHERE` clause.
## Examples
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-system-functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-query-system-functions.md
@@ -5,7 +5,7 @@
Previously updated : 10/15/2020 Last updated : 02/03/2021
@@ -17,7 +17,7 @@
|Function group|Description|Operations|
|--|--|--|
|[Array functions](sql-query-array-functions.md)|The array functions perform an operation on an array input value and return a numeric, Boolean, or array value. | [ARRAY_CONCAT](sql-query-array-concat.md), [ARRAY_CONTAINS](sql-query-array-contains.md), [ARRAY_LENGTH](sql-query-array-length.md), [ARRAY_SLICE](sql-query-array-slice.md) |
-|[Date and Time functions](sql-query-date-time-functions.md)|The date and time functions allow you to get the current UTC date and time in two forms; a numeric timestamp whose value is the Unix epoch in milliseconds or as a string which conforms to the ISO 8601 format. | [GetCurrentDateTime](sql-query-getcurrentdatetime.md), [GetCurrentTimestamp](sql-query-getcurrenttimestamp.md) |
+|[Date and Time functions](sql-query-date-time-functions.md)|The date and time functions allow you to get the current UTC date and time in two forms; a numeric timestamp whose value is the Unix epoch in milliseconds or as a string which conforms to the ISO 8601 format. | [GetCurrentDateTime](sql-query-getcurrentdatetime.md), [GetCurrentTimestamp](sql-query-getcurrenttimestamp.md), [GetCurrentTicks](sql-query-getcurrentticks.md) |
|[Mathematical functions](sql-query-mathematical-functions.md)|The mathematical functions each perform a calculation, usually based on input values that are provided as arguments, and return a numeric value. | [ABS](sql-query-abs.md), [ACOS](sql-query-acos.md), [ASIN](sql-query-asin.md), [ATAN](sql-query-atan.md), [ATN2](sql-query-atn2.md), [CEILING](sql-query-ceiling.md), [COS](sql-query-cos.md), [COT](sql-query-cot.md), [DEGREES](sql-query-degrees.md), [EXP](sql-query-exp.md), [FLOOR](sql-query-floor.md), [LOG](sql-query-log.md), [LOG10](sql-query-log10.md), [PI](sql-query-pi.md), [POWER](sql-query-power.md), [RADIANS](sql-query-radians.md), [RAND](sql-query-rand.md), [ROUND](sql-query-round.md), [SIGN](sql-query-sign.md), [SIN](sql-query-sin.md), [SQRT](sql-query-sqrt.md), [SQUARE](sql-query-square.md), [TAN](sql-query-tan.md), [TRUNC](sql-query-trunc.md) |
|[Spatial functions](sql-query-spatial-functions.md)|The spatial functions perform an operation on a spatial object input value and return a numeric or Boolean value. | [ST_DISTANCE](sql-query-st-distance.md), [ST_INTERSECTS](sql-query-st-intersects.md), [ST_ISVALID](sql-query-st-isvalid.md), [ST_ISVALIDDETAILED](sql-query-st-isvaliddetailed.md), [ST_WITHIN](sql-query-st-within.md) |
|[String functions](sql-query-string-functions.md)|The string functions perform an operation on a string input value and return a string, numeric, or Boolean value. | [CONCAT](sql-query-concat.md), [CONTAINS](sql-query-contains.md), [ENDSWITH](sql-query-endswith.md), [INDEX_OF](sql-query-index-of.md), [LEFT](sql-query-left.md), [LENGTH](sql-query-length.md), [LOWER](sql-query-lower.md), [LTRIM](sql-query-ltrim.md), [REGEXMATCH](sql-query-regexmatch.md), [REPLACE](sql-query-replace.md), [REPLICATE](sql-query-replicate.md), [REVERSE](sql-query-reverse.md), [RIGHT](sql-query-right.md), [RTRIM](sql-query-rtrim.md), [STARTSWITH](sql-query-startswith.md), [StringToArray](sql-query-stringtoarray.md), [StringToBoolean](sql-query-stringtoboolean.md), [StringToNull](sql-query-stringtonull.md), [StringToNumber](sql-query-stringtonumber.md), [StringToObject](sql-query-stringtoobject.md), [SUBSTRING](sql-query-substring.md), [ToString](sql-query-tostring.md), [TRIM](sql-query-trim.md), [UPPER](sql-query-upper.md) |
@@ -29,7 +29,7 @@ If you're currently using a user-defined function (UDF) for which a built-in f
## Built-in versus ANSI SQL functions
-The main difference between Cosmos DB functions and ANSI SQL functions is that Cosmos DB functions are designed to work well with schemaless and mixed-schema data. For example, if a property is missing or has a non-numeric value like `unknown`, the item is skipped instead of returning an error.
+The main difference between Cosmos DB functions and ANSI SQL functions is that Cosmos DB functions are designed to work well with schemaless and mixed-schema data. For example, if a property is missing or has a non-numeric value like `undefined`, the item is skipped instead of returning an error.
## Next steps
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/troubleshoot-not-found https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/troubleshoot-not-found.md
@@ -97,7 +97,7 @@ Wait for the indexing to catch up or change the indexing policy.
The database or container that the item exists in was deleted.

#### Solution:
-1. [Restore](./online-backup-and-restore.md#request-data-restore-from-a-backup) the parent resource, or re-create the resources.
+1. [Restore](./configure-periodic-backup-restore.md#request-restore) the parent resource, or re-create the resources.
1. Create a new resource to replace the deleted resource.

### 7. Container/Collection names are case-sensitive
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/troubleshoot-query-performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/troubleshoot-query-performance.md
@@ -200,13 +200,16 @@ Most system functions use indexes. Here's a list of some common string functions
- Left
- Substring - but only if the first num_expr is 0
-Following are some common system functions that don't use the index and must load each document:
+Following are some common system functions that don't use the index and must load each document when used in a `WHERE` clause:
| **System function** | **Ideas for optimization** |
| | |
-| UPPER/LOWER | Instead of using the system function to normalize data for comparisons, normalize the casing upon insertion. A query like ```SELECT * FROM c WHERE UPPER(c.name) = 'BOB'``` becomes ```SELECT * FROM c WHERE c.name = 'BOB'```. |
+| Upper/Lower | Instead of using the system function to normalize data for comparisons, normalize the casing upon insertion. A query like ```SELECT * FROM c WHERE UPPER(c.name) = 'BOB'``` becomes ```SELECT * FROM c WHERE c.name = 'BOB'```. |
+| GetCurrentDateTime/GetCurrentTimestamp/GetCurrentTicks | Calculate the current time before query execution and use that string value in the `WHERE` clause. |
| Mathematical functions (non-aggregates) | If you need to compute a value frequently in your query, consider storing the value as a property in your JSON document. |
+When used in the `SELECT` clause, inefficient system functions will not affect how queries can use indexes.
+
+### Improve string system function execution
+
+For some system functions that use indexes, you can improve query execution by adding an `ORDER BY` clause to the query.
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-deploy-compute-module-simple https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-compute-module-simple.md
@@ -0,0 +1,150 @@
+
+ Title: Use an IoT Edge module to deploy compute workload on Azure Stack Edge Pro with GPU | Microsoft Docs
+description: Learn how to run a compute workload using a pre-created IoT Edge module on your Azure Stack Edge Pro GPU device.
++++++ Last updated : 02/01/2021+
+Customer intent: As an IT admin, I need to understand how to configure compute on Azure Stack Edge Pro so I can use it to transform the data before sending it to Azure.
++
+# Tutorial: Run a compute workload with IoT Edge module on Azure Stack Edge Pro GPU
+
+<!--[!INCLUDE [applies-to-skus](../../includes/azure-stack-edge-applies-to-all-sku.md)]-->
+
+This tutorial describes how to run a compute workload using an IoT Edge module on your Azure Stack Edge Pro GPU device. After you configure the compute, the device will transform the data before sending it to Azure.
+
+This procedure can take around 10 to 15 minutes to complete.
++
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Configure compute
+> * Add shares
+> * Add a compute module
+> * Verify data transform and transfer
+
+
+## Prerequisites
+
+Before you set up a compute role on your Azure Stack Edge Pro GPU device, make sure that:
+
+- You've activated your Azure Stack Edge Pro device as described in [Activate your Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-activate.md).
+- You have an IoT Edge module that you can run on your data. In this tutorial, we use a `filemove2` module that moves data from the Edge local share on your device to an Edge share, from where the data goes to the Azure Storage account.
++
+## Configure compute
+++
+## Add shares
+
+For the simple deployment in this tutorial, you'll need two shares: one Edge share and one Edge local share.
+
+1. To add an Edge share on the device, do the following steps:
+
+ 1. In your Azure Stack Edge resource, go to **Cloud storage gateway > Shares**.
+ 2. From the command bar, select **+ Add share**.
+ 3. On the **Add share** blade, provide the share name and select the share type.
+ 4. To mount the Edge share, select the check box for **Use the share with Edge compute**.
+ 5. Select the **Storage account**, **Storage service**, an existing user, and then select **Create**.
+
+ ![Add an Edge share](./media/azure-stack-edge-gpu-deploy-compute-module-simple/add-edge-share-1.png)
++
+ > [!NOTE]
+ > To mount NFS share to compute, the compute network must be configured on same subnet as NFS Virtual IP address. For details on how to configure compute network, go to [Enable compute network on your Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md).
+
+ The Edge share is created, and you'll receive a successful creation notification. The share list might be updated, but you must wait for the share creation to be completed.
+
+2. To add an Edge local share on the device, repeat all the steps in the preceding step and select the check box for **Configure as Edge local share**. The data in the local share stays on the device.
+
+ ![Add an Edge local share](./media/azure-stack-edge-gpu-deploy-compute-module-simple/add-edge-share-2.png)
+
+ If you created a local NFS share, use the following remote sync (rsync) command option to copy files onto the share:
+
 `rsync <source file path> <destination file path>`
+
+ For more information about the `rsync` command, go to [Rsync documentation](https://www.computerhope.com/unix/rsync.htm).
+
+3. Go to **Cloud storage gateway > Shares** to see the updated list of shares.
+
+ ![Updated list of shares](./media/azure-stack-edge-gpu-deploy-compute-module-simple/add-edge-share-3.png)
+
+
+## Add a module
+
+You can add a custom or a pre-built module. The device does not come with pre-built or custom modules. To learn how to create a custom module, go to [Develop a C# module for your Azure Stack Edge Pro device](azure-stack-edge-j-series-create-iot-edge-module.md).
+
+In this section, you add a custom module to the IoT Edge device that you created in [Develop a C# module for your Azure Stack Edge Pro](azure-stack-edge-j-series-create-iot-edge-module.md). This custom module takes files from an Edge local share on the Edge device and moves them to an Edge (cloud) share on the device. The cloud share then pushes the files to the Azure storage account that's associated with the cloud share.
+
+To add a module, do the following steps:
+
+1. Go to **IoT Edge > Modules**. From the command bar, select **+ Add module**.
+
+2. In the **Add module** blade, input the following values:
+
+
+ |Field |Value |
+ |||
+ |Name | A unique name for the module. This module is a docker container that you can deploy to the IoT Edge device that's associated with your Azure Stack Edge Pro. |
+ |Image URI | The image URI for the corresponding container image for the module. |
+ |Credentials required | If checked, username and password are used to retrieve modules with a matching URL. |
+ |Input share | Select an input share. The Edge local share is the input share in this case. The module used here moves files from the Edge local share to an Edge share where they are uploaded into the cloud. |
+ |Output share | Select an output share. The Edge share is the output share in this case. |
 |Trigger type | Select from **File** or **Schedule**. A file trigger fires whenever a file event occurs, such as when a file is written to the input share. A scheduled trigger fires based on a schedule that you define. |
+ |Trigger name | A unique name for your trigger. |
+ |Environment variables| Optional information that will help define the environment in which your module will run. |
+
+ ![Add and configure module](./media/azure-stack-edge-gpu-deploy-compute-module-simple/add-module-1.png)
+
+3. Select **Add**. The module gets added. The **IoT Edge > Modules** page updates to indicate that the module is deployed. The runtime status of the module you added should be *running*.
+
+ ![Module deployed](./media/azure-stack-edge-gpu-deploy-compute-module-simple/add-module-2.png)
+
+### Verify data transform and transfer
+
+The final step is to ensure that the module is running and processing data as expected. The runtime status of the module should be *running* for your IoT Edge device in the IoT Hub resource.
+
+To verify that the module is running and processing data as expected, do the following:
++
+1. In File Explorer, connect to both the Edge local and Edge shares you created previously.
+
+ ![Connect to Edge local and Edge cloud shares](./media/azure-stack-edge-gpu-deploy-compute-module-simple/verify-data-1.png)
+
+1. Add data to the local share.
+
+ ![File copied to Edge local share](./media/azure-stack-edge-gpu-deploy-compute-module-simple/verify-data-2.png)
+
+ The data gets moved to the cloud share.
+
+ ![File moved to Edge cloud share](./media/azure-stack-edge-gpu-deploy-compute-module-simple/verify-data-3.png)
+
 The data is then pushed from the cloud share to the storage account. To view the data, you can use Storage Explorer or Azure Storage in the Azure portal.
+
+ ![Verify data in storage account](./media/azure-stack-edge-gpu-deploy-compute-module-simple/verify-data-4.png)
+
+You have completed the validation process.
++
+## Next steps
+
+In this tutorial, you learned how to:
+
+> [!div class="checklist"]
+> * Configure compute
+> * Add shares
+> * Add a compute module
+> * Verify data transform and transfer
+
+To learn how to administer your Azure Stack Edge Pro device, see:
+
+> [!div class="nextstepaction"]
+> [Use local web UI to administer an Azure Stack Edge Pro](azure-stack-edge-manage-access-power-connectivity-mode.md)
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-deploy-configure-compute https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-configure-compute.md
@@ -7,7 +7,7 @@
Previously updated : 01/05/2021 Last updated : 01/07/2021 Customer intent: As an IT admin, I need to understand how to configure compute on Azure Stack Edge Pro so I can use it to transform the data before sending it to Azure.
@@ -39,40 +39,7 @@ Before you set up a compute role on your Azure Stack Edge Pro device, make sure
## Configure compute
-To configure compute on your Azure Stack Edge Pro, you'll create an IoT Hub resource via the Azure portal.
-
-1. In the Azure portal of your Azure Stack Edge resource, go to **Overview**, and select **IoT Edge**.
-
- ![Get started with compute](./media/azure-stack-edge-gpu-deploy-configure-compute/configure-compute-1.png)
-
-2. In **Enable IoT Edge service**, select **Add**.
-
- ![Configure compute](./media/azure-stack-edge-gpu-deploy-configure-compute/configure-compute-2.png)
-
-3. On the **Configure Edge compute** blade, input the following information:
-
- |Field |Value |
- |||
- |IoT Hub | Choose from **New** or **Existing**. <br> By default, a Standard tier (S1) is used to create an IoT resource. To use a free tier IoT resource, create one and then select the existing resource. <br> In each case, the IoT Hub resource uses the same subscription and resource group that is used by the Azure Stack Edge resource. |
- |Name |Enter a name for your IoT Hub resource. |
-
- ![Get started with compute 2](./media/azure-stack-edge-gpu-deploy-configure-compute/configure-compute-3.png)
-
-4. When you finish the settings, select **Review + Create**. Review the settings for your IoT Hub resource, and select **Create**.
-
- Resource creation for an IoT Hub resource takes several minutes. After the resource is created, the **Overview** indicates the IoT Edge service is now running.
-
- ![Get started with compute 3](./media/azure-stack-edge-gpu-deploy-configure-compute/configure-compute-4.png)
-
-5. To confirm the Edge compute role has been configured, select **Properties**.
-
- ![Get started with compute 4](./media/azure-stack-edge-gpu-deploy-configure-compute/configure-compute-5.png)
-
- When the Edge compute role is set up on the Edge device, it creates two devices: an IoT device and an IoT Edge device. Both devices can be viewed in the IoT Hub resource. An IoT Edge Runtime is also running on this IoT Edge device. At this point, only the Linux platform is available for your IoT Edge device.
-
-It can take 20-30 minutes to configure compute because, behind the scenes, virtual machines and a Kubernetes cluster are being created.
-
-After you have successfully configured compute in the Azure portal, a Kubernetes cluster and a default user associated with the IoT namespace (a system namespace controlled by Azure Stack Edge Pro) exist.
## Get Kubernetes endpoints
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-modify-fpga-modules-gpu https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-modify-fpga-modules-gpu.md
@@ -0,0 +1,246 @@
+
+ Title: Modify IoT Edge modules on FPGA device to run on Azure Stack Edge Pro GPU device
+description: Describes what modifications are needed for existing IoT Edge modules on existing FPGA devices to run on your Azure Stack Edge Pro GPU device.
+ Last updated : 02/01/2021
+# Run existing IoT Edge modules from Azure Stack Edge Pro FPGA devices on Azure Stack Edge Pro GPU device
+
+This article details the changes needed for a docker-based IoT Edge module that runs on an Azure Stack Edge Pro FPGA device so that it can run on the Kubernetes-based IoT Edge platform on an Azure Stack Edge Pro GPU device.
+
+## About IoT Edge implementation
+
+The IoT Edge implementation differs between Azure Stack Edge Pro FPGA devices and Azure Stack Edge Pro GPU devices. On the GPU devices, Kubernetes is used as the hosting platform for IoT Edge. IoT Edge on the FPGA devices uses a docker-based platform. The IoT Edge docker-based application model is automatically translated to the Kubernetes native application model. However, some changes may still be needed because only a small subset of the Kubernetes application model is supported.
+
+If you are migrating your workloads from an FPGA device to a GPU device, you will need to make changes to the existing IoT Edge modules for them to run successfully on the Kubernetes platform. You may need to specify your storage, networking, resource usage, and web proxy requirements differently.
+
+## Storage
+
+Consider the following information when specifying storage for the IoT Edge modules.
+
+- Storage for containers on Kubernetes is specified using volume mounts.
+- A deployment on Kubernetes can't use the docker `Binds` option to associate persistent storage or host paths.
+  - For persistent storage, use `Mounts` with type `volume`.
+  - For host paths, use `Mounts` with type `bind`.
+- For IoT Edge on Kubernetes, binding through `Mounts` works only for directories, not for files.
+
+#### Example - Storage via volume mounts
+
+For IoT Edge on Docker, host path bindings are used to map the shares on the device to paths inside the container. Here are the container create options used on the FPGA devices:
+
+```json
+{
+ "HostConfig":
+ {
+ "Binds":
+ [
+ "<Host storage path for Edge local share>:<Module storage path>"
+ ]
+ }
+}
+```
+For host paths with IoT Edge on Kubernetes, here's an example of using `Mounts` with type `bind`:
+
+```json
+{
+ "HostConfig": {
+ "Mounts": [
+ {
+ "Target": "<Module storage path>",
+ "Source": "<Host storage path>",
+ "Type": "bind"
+ }
+ ]
+ }
+}
+```
+
+
+For the GPU devices running IoT Edge on Kubernetes, volume mounts are used to specify storage. To provision storage using shares, the value of `Mounts.Source` would be the name of the SMB or NFS share that was provisioned on your GPU device. Here, `/home/input` is the path at which the volume is accessible within the container. Here are the container create options used on the GPU devices:
+
+```json
+{
+ "HostConfig": {
+ "Mounts": [
+ {
+ "Target": "/home/input",
+ "Source": "<nfs-or-smb-share-name-here>",
+ "Type": "volume"
+ },
+ {
+ "Target": "/home/output",
+ "Source": "<nfs-or-smb-share-name-here>",
+ "Type": "volume"
+ }]
+ }
+}
+```
+
+## Network
+
+Consider the following information when specifying networking for the IoT Edge modules.
+
+- A `HostPort` specification is required to expose a service both inside and outside the cluster.
+ - `k8s-experimental` options are available to limit the exposure of a service to the cluster only.
+- Inter-module communication requires a `HostPort` specification and connection over the mapped port (not over the container's exposed port).
+- Host networking works with `dnsPolicy = ClusterFirstWithHostNet`, so that all containers (especially `edgeHub`) don't have to be on the host network as well.
+- Adding port mappings for both TCP and UDP in the same request doesn't work.
+
+#### Example - External access to modules
+
+For any IoT Edge modules that specify port bindings, an IP address is assigned from the Kubernetes external service IP range that was specified in the local UI of the device. There are no changes to the container create options between IoT Edge on Docker and IoT Edge on Kubernetes, as shown in the following example.
+
+```json
+{
+ "HostConfig": {
+ "PortBindings": {
+ "5000/tcp": [
+ {
+ "HostPort": "5000"
+ }
+ ]
+ }
+ }
+}
+```
+
+To query the IP address assigned to your module, you can use the Kubernetes dashboard as described in [Get IP address for services or modules](azure-stack-edge-gpu-monitor-kubernetes-dashboard.md#get-ip-address-for-services-or-modules).
+
+Alternatively, you can [Connect to the PowerShell interface of the device](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) and use the `iotedge list` command to list all the modules running on your device. The [command output](azure-stack-edge-gpu-connect-powershell-interface.md#debug-kubernetes-issues-related-to-iot-edge) will also indicate the external IPs associated with the module.
+
+## Resource usage
+
+With the Kubernetes-based IoT Edge setup on the GPU devices, resources such as hardware acceleration, memory, and CPU are specified differently than on the FPGA devices.
+
+#### Compute acceleration usage
+
+To deploy modules on the FPGA devices, use the container create options as shown in the following config:
+
+```json
+{
+ "HostConfig": {
+ "Privileged": true,
+ "PortBindings": {
+ "50051/tcp": [
+ {
+ "HostPort": "50051"
+ }
+ ]
+ }
+ },
+ "k8s-experimental": {
+ "resources": {
+ "limits": {
+ "microsoft.com/fpga_catapult": 2
+ },
+ "requests": {
+ "microsoft.com/fpga_catapult": 2
+ }
+ }
+ },
+ "Env": [
+ "WIRESERVER_ADDRESS=10.139.218.1"
+ ]
+}
+```
+
+
+For the GPU devices, use resource request specifications instead of device bindings, as shown in the following minimal configuration. You request `nvidia.com/gpu` resources instead of the `fpga_catapult` resources, and you don't need to specify the `WIRESERVER_ADDRESS` environment variable.
+
+```json
+{
+ "HostConfig": {
+ "Privileged": true,
+ "PortBindings": {
+ "50051/tcp": [
+ {
+ "HostPort": "50051"
+ }
+ ]
+ }
+ },
+ "k8s-experimental": {
+ "resources": {
+ "limits": {
+ "nvidia.com/gpu": 2
+ }
+ }
+  }
+}
+```
+
+#### Memory and CPU usage
+
+To set memory and CPU usage, specify resource limits for modules in the `k8s-experimental` section.
+
+```json
+ "k8s-experimental": {
+ "resources": {
+ "limits": {
+ "memory": "128Mi",
+ "cpu": "500m",
+ "nvidia.com/gpu": 2
+ },
+ "requests": {
+ "nvidia.com/gpu": 2
+ }
+    }
+  }
+```
+The memory and CPU specifications aren't required, but they're generally good practice. If `requests` isn't specified, the values set in `limits` are used as the minimum required.
+
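+A minimal sketch of separate `requests` and `limits` values is shown below; the numbers here are illustrative, not recommendations:
+
+```json
+  "k8s-experimental": {
+    "resources": {
+      "requests": {
+        "memory": "64Mi",
+        "cpu": "250m"
+      },
+      "limits": {
+        "memory": "128Mi",
+        "cpu": "500m"
+      }
+    }
+  }
+```
+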
+Using shared memory for modules also requires a different approach on the Kubernetes platform.
+
+## Web proxy
+
+If you have a web proxy configured in your network, configure the following environment variables for the `edgeHub` deployment on your docker-based IoT Edge setup on the FPGA devices:
+
+- `https_proxy : <proxy URL>`
+- `UpstreamProtocol : AmqpWs` (unless the web proxy allows `Amqp` traffic)
+
+For the Kubernetes-based IoT Edge setup on the GPU devices, you'll also need to configure this additional variable during the deployment (a sketch of these settings follows below):
+
+- `no_proxy`: `localhost`
+
+The IoT Edge proxy on the Kubernetes platform uses ports 35000 and 35001. Make sure that your modules don't run on these ports, or port conflicts could occur.
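+
+A minimal sketch of how these variables might look in the `edgeHub` section of a deployment manifest follows; the proxy URL and image tag are illustrative assumptions, not values from this article:
+
+```json
+"edgeHub": {
+  "type": "docker",
+  "env": {
+    "https_proxy": { "value": "http://proxy.contoso.com:3128" },
+    "UpstreamProtocol": { "value": "AmqpWs" },
+    "no_proxy": { "value": "localhost" }
+  },
+  "settings": {
+    "image": "mcr.microsoft.com/azureiotedge-hub:1.0"
+  }
+}
+```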
+
+## Other differences
+
+- **Deployment strategy**: You may need to change the deployment behavior for any updates to the module. The default behavior for IoT Edge modules is a rolling update. This behavior prevents the updated module from restarting if the module is using resources such as hardware acceleration or network ports. This behavior can have unexpected effects, especially when dealing with persistent volumes on the Kubernetes platform for the GPU devices. To override this default behavior, you can specify a `Recreate` strategy in the `k8s-experimental` section of your module.
+
+ ```
+ {
+ "k8s-experimental": {
+ "strategy": {
+ "type": "Recreate"
+ }
+ }
+ }
+ ```
+
+- **Module names**: Module names should follow Kubernetes naming conventions. You may need to rename the modules running on IoT Edge with Docker when you move those modules to IoT Edge with Kubernetes; for example, a module named `Sample_Module` would need to be renamed to something like `sample-module`. For more information on naming, see [Kubernetes naming conventions](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/).
+- **Other options**:
+ - Certain docker create options that worked on the FPGA devices will not work in the Kubernetes environment on your GPU devices. For example, `EntryPoint`.
+ - Colons (`:`) in environment variable names need to be replaced by `__` (see the sketch after this list).
+ - A **Container Creating** status for a Kubernetes pod leads to a **backoff** status for a module on the IoT Hub resource. While there are a number of reasons for a pod to be in this status, a common reason is a large container image being pulled over a low-bandwidth network connection. When the pod is in this state, the status of the module appears as **backoff** in IoT Hub even though the module is just starting up.
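+
+For example, a variable that was defined as `Logging:LogLevel:Default` on the FPGA device would be set as follows on the GPU device. This is a minimal, illustrative sketch; the variable name and value are hypothetical:
+
+```json
+{
+  "Env": [
+    "Logging__LogLevel__Default=Debug"
+  ]
+}
+```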
+
+## Next steps
+
+- Learn more about how to [Configure GPU to use a module](azure-stack-edge-j-series-configure-gpu-modules.md).
databox https://docs.microsoft.com/en-us/azure/databox/data-box-deploy-export-picked-up https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-deploy-export-picked-up.md
@@ -7,7 +7,7 @@
Previously updated : 12/18/2020 Last updated : 02/03/2021 # Customer intent: As an IT admin, I need to be able to return Data Box to upload on-premises data from my server onto Azure.
@@ -170,9 +170,27 @@ If needed, you can contact Quantium Solution Support (Japanese language) at the
Should you come across any issues, email Data Box Operations Asia [adbo@microsoft.com](mailto:adbo@microsoft.com) providing the job name in subject header and the issue encountered.
+## [United Arab Emirates](#tab/in-uae)
+
+1. Keep the original box used to ship the device for return shipment.
+2. Make sure the data copy to device is complete, and the **Prepare to ship** step completed successfully.
+3. Note the reference number on the **Prepare to ship** page of the device local web UI.
+4. Power off the device, and remove the cables. Spool and securely place the power cord that was provided with the device in the back of the device.
+5. Pack the device for return shipment in the original box.
+6. Email [Azure Data Box Operations](mailto:adbops@microsoft.com) to get an ID that will be used to identify the package when it arrives back at the datacenter.
+7. Write down this ID on the printed shipping label, next to the return address, so that it's clearly visible.
+8. Book a pickup online by going to [DHL Express UAE](https://mydhl.express.dhl/ae/en/home.html#/schedulePickupTab) > **Schedule a Pickup**.
+ - Enter the reference number from the **Prepare to ship** page of the device local web UI in the waybill number field.
+ - Bookings are accepted from 9:00 AM to 2:00 PM, six days a week (excluding Fridays and public holidays).
+ - Pickup requests should be placed at least 90 minutes before customer closing time.
+9. If you come across any issue with the DHL booking tool, you can contact DHL using any of these methods:
+ - Call 04-2924545.
+ - Email [ecom.ae@dhl.com](mailto:ecom.ae@dhl.com) with details of the issue(s), and put the waybill number in the Subject: line.
+ - Call DHL Customer Support at 600 567567.
+ ## [Self-Managed](#tab/in-selfmanaged)
-If you are using Data Box in Japan, Singapore, Korea, India, South Africa, or West Europe and have selected the self-managed shipping option during order creation, follow these instructions.
+If you are using Data Box in Japan, Singapore, Korea, India, South Africa, United Kingdom, West Europe, or Australia and have selected the self-managed shipping option during order creation, follow these instructions.
1. Note down the Authorization code shown on the Prepare to Ship page of the Data Box local web UI after this step successfully completes.
2. Power off the device and remove the cables. Spool and securely place the power cord that was provided with the device at the back of the device.
@@ -207,5 +225,3 @@ Advance to the next article to learn how to manage your Data Box.
> [!div class="nextstepaction"]
> [Manage Data Box via Azure portal](./data-box-portal-admin.md)
databox https://docs.microsoft.com/en-us/azure/databox/data-box-deploy-picked-up https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-deploy-picked-up.md
@@ -7,7 +7,7 @@
Previously updated : 12/10/2020 Last updated : 02/02/2021 ms.localizationpriority: high
@@ -274,6 +274,35 @@ If you come across any issues, email Data Box Operations Asia [adbo@microsoft.co
## Erasure of data from Data Box
+## [United Arab Emirates](#tab/in-uae)
+
+1. Keep the original box used to ship the device for return shipment.
+2. Make sure the data copy to device is complete, and the **Prepare to ship** step completed successfully.
+3. Note the reference number on the **Prepare to ship** page of the device local web UI.
+4. Power off the device, and remove the cables. Spool and securely place the power cord that was provided with the device in the back of the device.
+5. Pack the device for return shipment in the original box.
+6. Email [Azure Data Box Operations](mailto:adbops@microsoft.com) to get an ID that will be used to identify the package when it arrives back at the datacenter.
+7. Write down this ID on the printed shipping label, next to the return address, so that it's clearly visible.
+8. Book a pickup online by going to [DHL Express UAE](https://mydhl.express.dhl/ae/en/home.html#/schedulePickupTab) > **Schedule a Pickup**.
+ - Enter the reference number from the **Prepare to ship** page of the device local web UI in the waybill number field.
+ - Bookings are accepted from 9:00 AM to 2:00 PM, six days a week (excluding Fridays and public holidays).
+ - Pickup requests should be placed at least 90 minutes before customer closing time.
+9. If you come across any issue with the DHL booking tool, you can contact DHL using any of these methods:
+ - Call 04-2924545.
+ - Email [ecom.ae@dhl.com](mailto:ecom.ae@dhl.com) with details of the issue(s), and put the waybill number in the Subject: line.
+ - Call DHL Customer Support at 600 567567.
+
+## Verify data upload to Azure
+
+## Erasure of data from Data Box
+
Once the upload to Azure is complete, the Data Box erases the data on its disks as per the [NIST SP 800-88 Revision 1 guidelines](https://csrc.nist.gov/News/2014/Released-SP-800-88-Revision-1,-Guidelines-for-Medi).

::: zone-end
@@ -320,7 +349,7 @@ Once the upload to Azure is complete, the Data Box erases the data on its disks
## [Self-Managed](#tab/in-selfmanaged)
-If you're using Data Box in US Government, Japan, Singapore, Korea, India, South Africa, or West Europe, and you selected self-managed shipping when you created your order, follow these instructions.
+If you're using Data Box in US Government, Japan, Singapore, Korea, India, South Africa, United Kingdom, West Europe, or Australia, and you selected self-managed shipping when you created your order, follow these instructions.
1. Write down the Authorization code that's shown on the **Prepare to Ship** page of the local web UI for the Data Box after the step completes successfully.
2. Power off the device and remove the cables. Spool and securely place the power cord that was provided with the device at the back of the device.
databox https://docs.microsoft.com/en-us/azure/databox/data-box-disk-deploy-picked-up https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-disk-deploy-picked-up.md
@@ -7,7 +7,7 @@
Previously updated : 12/10/2020 Last updated : 02/02/2021 ms.localizationpriority: high
@@ -28,7 +28,7 @@ In this tutorial, you will learn how to:
## Prerequisites
-Before you begin, make sure that you have completed the [Tutorial: Copy data to Azure Data Box Disk and verify](data-box-disk-deploy-copy-data.md).
+Before you begin, make sure you've completed the [Tutorial: Copy data to Azure Data Box Disk and verify](data-box-disk-deploy-copy-data.md).
## Ship Data Box Disk back
@@ -61,7 +61,7 @@ Take the following steps if returning the device in US or Canada.
- Call the local UPS (country/region-specific toll free number).
- In your call, quote the reverse shipment tracking number as shown in your printed label.
- - If the tracking number is not quoted, UPS will require you to pay an additional charge during pickup.
+ - If the tracking number isn't quoted, UPS will require you to pay an additional charge during pickup.
- Instead of scheduling the pickup, you can also drop off the Data Box Disk at the nearest drop-off location.
@@ -81,7 +81,7 @@ Take the following steps if returning the device in Europe or the UK.
Azure datacenters in Australia have an additional security notification. All the inbound shipments must have an advanced notification. Take the following steps for pickup in Australia.
-1. Use the provided return ship label and make sure that the TAU code (reference number) is written on it. If the provided shipping label is missing or you have any other issues, email [Data Box Asia Operations](mailto:adbo@microsoft.com). Provide the order name in subject header and details of the issue you are facing.
+1. Use the provided return ship label and make sure that the TAU code (reference number) is written on it. If the provided shipping label is missing or you have any other issues, email [Data Box Asia Operations](mailto:adbo@microsoft.com). Provide the order name in subject header and details of the issue.
2. Affix the label on the box.
3. Book a pickup online at the link https://mydhl.express.dhl/au/en/schedule-pickup.html#/schedule-pickup#label-reference.
@@ -117,13 +117,13 @@ If needed, you can contact Quantium Solution Support (Japanese language) at the
1. Make sure to include the return consignment note.
2. To request pickup when consignment note is present:
   1. Call *Quantium Solutions International* hotline at 070-8231-1418 during office hours (10 AM to 5 PM, Monday to Friday). Quote *Microsoft Azure pickup* and the service request number to arrange for a collection.
- 2. If the hotline is busy, email `microsoft@rocketparcel.com`, with the email subject *Microsoft Azure Pickup* and the service request number as reference.
- 3. If the courier does not arrive for collection, call *Quantium Solutions International* hotline for alternate arrangements.
+ 2. If the hotline is busy, email [microsoft@rocketparcel.com](mailto:microsoft@rocketparcel.com), with the email subject *Microsoft Azure Pickup* and the service request number for reference.
+ 3. If the courier doesn't arrive for collection, call *Quantium Solutions International* hotline for alternate arrangements.
4. You receive an email confirmation for the pickup schedule.
-3. Do this step only if the consignment note is not present. To request pickup:
- 1. Call *Quantium Solutions International* hotline at 070-8231-1418 during office hours (10 AM to 5 PM, Monday to Friday). Quote *Microsoft Azure pickup* and the service request number to arrange for a collection. Specify that you need a new consignment note to arrange for a collection. Provide sender (customer), receiver information (Azure datacenter), and reference number (service request number).
- 2. If the hotline is busy, email `microsoft@rocketparcel.com`, with the email subject *Microsoft Azure Pickup* and the service request number as reference.
- 3. If the courier does not arrive for collection, call *Quantium Solutions International* hotline for alternate arrangements.
+3. Do this step only if the consignment note isn't present. To request pickup:
+ 1. Call *Quantium Solutions International* hotline at 070-8231-1418 during office hours (10 AM to 5 PM, Monday to Friday). Quote *Microsoft Azure pickup* and the service request number to arrange for a collection. Specify that you need a new consignment note to arrange for a collection. Provide sender (customer), receiver information (Azure datacenter), and reference number (service request number).
+ 2. If the hotline is busy, email [microsoft@rocketparcel.com](mailto:microsoft@rocketparcel.com), with the email subject *Microsoft Azure Pickup* and the service request number as reference.
+ 3. If the courier doesn't arrive for collection, call *Quantium Solutions International* hotline for alternate arrangements.
4. You receive a verbal confirmation if the request is made via telephone.

### [Singapore](#tab/in-singapore)
@@ -150,7 +150,7 @@ If needed, you can contact Quantium Solution Support (Japanese language) at the
> * Before 3 PM, pickup will be the next business day between 9 AM and 1 PM.
> * After 3 PM, pickup will be the next business day between 2 PM and 6 PM.
- If you encounter any issues, kindly reach out to Data Box Operations Asia at adbo@microsoft.com. Provide the job name in the subject header and the issue encountered.
+ If you come across any issues, contact Data Box Operations Asia at [adbo@microsoft.com](mailto:adbo@microsoft.com). Provide the job name in the subject header and the issue encountered.
3. Hand over to the courier.
@@ -183,7 +183,7 @@ Take the following steps if returning the device in South Africa.
* Or drop off the package at the nearest DHL service point.
-5. If you encounter any issues, email [Priority.Support@dhl.com](mailto:Priority.Support@dhl.com) with details of the issue(s) you encountered and put the waybill number in the Subject: line. You can also call +27(0)119213902.
+5. If you come across any issues, email [Priority.Support@dhl.com](mailto:Priority.Support@dhl.com) with details of the issue(s), and put the waybill number in the Subject: line. You can also call +27(0)119213902.
### [China](#tab/in-china)
@@ -203,7 +203,7 @@ Take the following steps if returning the device in China.
3. Receive an email confirmation from FedEx after completion of booking pickup.ΓÇ»
-4. If you encounter any issues, email [DL-DC-SHA@oe.21vianet.com](mailto:DL-DC-SHA@oe.21vianet.com) with details of the issue encountered and subject mentioning order name.
+4. If you come across any issues, email [DL-DC-SHA@oe.21vianet.com](mailto:DL-DC-SHA@oe.21vianet.com) with details of the issue(s), and put the order name in the Subject: line.
#### Premier Customer Care contact information
@@ -227,20 +227,20 @@ Take the following steps if returning the device in China.
### [Self-Managed](#tab/in-selfmanaged)
-If you are using Data Box Disk in US Government, Japan, Singapore, Korea, West Europe, South Africa or India and have selected the self-managed shipping option during order creation, follow these instructions.
+If you are using Data Box Disk in US Government, Japan, Singapore, Korea, United Kingdom, West Europe, Australia, South Africa, or India and have selected the self-managed shipping option during order creation, follow these instructions.
1. Go to the **Overview** blade for your order in the Azure portal. Go through the instructions displayed when you select **Schedule pickup**. You should see an Authorization code that is used at the time of dropping off the order.
-2. Send an email to the Azure Data Box Operations team using the following template when you are ready to return the device.
+2. Send an email to the Azure Data Box Operations team using the following template when you're ready to return the device.
   ```
   To: adbops@microsoft.com
   Subject: Request for Azure Data Box Disk drop-off for order: 'orderName'
   Body:
   a. Order name
- b. Contact name of the person dropping off. You will need to display a Government approved ID during the drop off.
+ b. Contact name of the person dropping off. You will need to display a Government approved ID during the drop-off.
```
-3. Azure Data Box Operations team will work with you to arrange the drop-off to the Azure Datacenter.
+3. The Azure Data Box Operations team will work with you to arrange the drop-off at the Azure datacenter.
@@ -254,7 +254,7 @@ In this tutorial, you learned about Azure Data Box Disk topics such as:
> > * Ship Data Box Disk to Microsoft
-Advance to the next how-to to learn how to verify data upload from Data Box Disk to the Azure Storage account.
+Advance to the next how-to to learn how to verify data upload from Data Box Disk to the Azure storage account.
> [!div class="nextstepaction"] > [Verify data upload from Azure Data Box Disk](./data-box-disk-deploy-upload-verify.md)
databox https://docs.microsoft.com/en-us/azure/databox/data-box-disk-portal-customer-managed-shipping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-disk-portal-customer-managed-shipping.md
@@ -7,7 +7,7 @@
Previously updated : 05/20/2020 Last updated : 02/02/2021
@@ -20,7 +20,9 @@ This article describes self-managed shipping tasks to order, pick-up, and drop-o
Self-managed shipping is available as an option when you [Order Azure Data Box Disk](data-box-disk-deploy-ordered.md). Self-managed shipping is only available in the following regions:

* US Government
+* United Kingdom
* Western Europe
+* Australia
* Japan
* Singapore
* South Korea
@@ -37,7 +39,7 @@ When you place a Data Box Disk order, you can choose self-managed shipping optio
2. When choosing shipping type, select the **Self-managed shipping** option. This option is only available if you are in a supported region as described in the prerequisites.
-3. Once you have provided your shipping address, you will need to validate it and complete your order.
+3. After you provide your shipping address, you'll need to validate it and complete your order.
![Screenshot of the Add Shipping Address dialog box with the Ship using options and the Add shipping address option called out.](media\data-box-portal-customer-managed-shipping\choose-self-managed-shipping-2.png)
@@ -49,34 +51,34 @@ When you place a Data Box Disk order, you can choose self-managed shipping optio
![Schedule pickup](media\data-box-disk-portal-customer-managed-shipping\data-box-disk-user-pickup-02c.png)
-6. After you have scheduled your device pickup, you will be able to view your authorization code in the **Schedule pickup for Azure**.
+6. After you've scheduled your device pickup, you can view your authorization code in **Schedule pickup for Azure**.
![Screenshot of the Schedule pick up for Azure dialog box with the Authorization code for Pickup text box called out.](media\data-box-disk-portal-customer-managed-shipping\data-box-disk-authcode-01b.png)
- Make a note of this **Authorization code**.
+ Make a note of this authorization code.
- As per the security requirements, at the time of scheduling pick-up, it is necessary to present the name of the person who will be arriving for pick-up.
+ As per security requirements, at the time of scheduling pick-up, it's necessary to present the name of the person who will be arriving for pick-up.
- You also need to provide details of who will be going to the datacenter for pickup. You or the point of contact must carry a Government approved photo ID that will be validated at the datacenter.
+ You also need to provide details of who will go to the datacenter for the pick-up. You or the point of contact must carry a government-approved photo ID that will be validated at the datacenter.
- Additionally, the person who is picking up the device also needs to have the **Authorization code**. The authorization code is unique for a pickup or a drop off and is validated at the datacenter.
+ The person who is picking up the device also needs to have the authorization code. The authorization code is unique for a pick-up or a drop-off and is validated at the datacenter.
-7. Your order automatically moves to the **Picked up** state once the device has been picked up from the datacenter.
+7. Your order automatically moves to the **Picked up** state after the device is picked up from the datacenter.
![Picked up](media\data-box-disk-portal-customer-managed-shipping\data-box-disk-ready-disk-01b.png) 8. After the device is picked up, you may copy data to the Data Box Disk(s) at your site. After the data copy is complete, you can prepare to ship the Data Box Disk.
- Once you have finished data copy, you will need to contact operations to schedule an appointment for the drop off. You will need to share the details of the person coming to the datacenter to drop off the disks. The datacenter will also need to verify the authorization code at the time of drop-off. The authorization code for drop off will be available in Azure portal under **Schedule drop off**.
+ After you finish the data copy, contact operations to schedule an appointment for the drop-off. You'll need to share the details of the person coming to the datacenter to drop off the disks. The datacenter will also need to verify the authorization code at the time of drop-off. You'll find the authorization code for drop-off in the Azure portal under **Schedule drop off**.
> [!NOTE]
- > Do not share the authorization code over email. This is only to be verified at the datacenter during drop off.
+ > Do not share the authorization code over email. This is only to be verified at the datacenter during drop-off.
-9. If you have received an appointment for drop off the order should now be at the **Ready to receive at Azure datacenter** state in the Azure portal.
+9. After you receive an appointment for drop-off, the order should be in the **Ready to receive at Azure datacenter** state in the Azure portal.
![Screenshot of the Add Shipping Address dialog box with the Ship using options out and the Add shipping address option called out.](media\data-box-disk-portal-customer-managed-shipping\data-box-disk-authcode-dropoff-02b.png)
-10. After your ID and authorization code have been verified and you have dropped off the device at the datacenter, the order status should be **Received**.
+10. After your ID and authorization code have been verified, and you've dropped off the device at the datacenter, the order status should be **Received**.
![Received Complete](media\data-box-disk-portal-customer-managed-shipping\data-box-disk-received-01a.png)
databox https://docs.microsoft.com/en-us/azure/databox/data-box-portal-customer-managed-shipping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-portal-customer-managed-shipping.md
@@ -7,7 +7,7 @@
Previously updated : 08/12/2020 Last updated : 02/02/2021
@@ -20,12 +20,14 @@ This article describes self-managed shipping tasks to order, pick up, and drop-o
Self-managed shipping is available as an option when you [Order Azure Data Box](data-box-deploy-ordered.md). Self-managed shipping is only available in the following regions:

* US Government
+* United Kingdom
* Western Europe
* Japan
* Singapore
* South Korea
* India
* South Africa
+* Australia
## Use self-managed shipping
@@ -37,7 +39,7 @@ When you place a Data Box order, you can choose the self-managed shipping option
2. When choosing a shipping type, select the **Self-managed shipping** option. This option is only available if you are in a supported region as described in the prerequisites.
-3. Once you have provided your shipping address, you will need to validate it and complete your order.
+3. Once you've provided your shipping address, you'll need to validate it and complete your order.
![Self-managed shipping, validate and add address](media\data-box-portal-customer-managed-shipping\choose-self-managed-shipping-2.png)
@@ -53,11 +55,11 @@ When you place a Data Box order, you can choose the self-managed shipping option
![Schedule pickup for Azure instructions](media\data-box-portal-customer-managed-shipping\data-box-portal-schedule-pickup-email-01.png)
-6. After you have scheduled your device pickup, you will be able to view your device authorization code in the **Schedule pickup for Azure** pane.
+6. After you schedule your device pickup, you can view your device authorization code in the **Schedule pickup for Azure** pane.
![Viewing your device authorization code](media\data-box-portal-customer-managed-shipping\data-box-portal-auth-01b.png)
- Make a note of this **Authorization code**. As per the security requirements, at the time of scheduling pick-up, it is necessary to present the name of the person who would arrive for pick-up.
+ Make a note of this **Authorization code**. As per security requirements, at the time of scheduling pick-up, it's necessary to present the name of the person who would arrive for pick-up.
You also need to provide details of who will be going to the datacenter for pickup. You or the point of contact must carry a Government approved photo ID that will be validated at the datacenter.
@@ -69,16 +71,16 @@ When you place a Data Box order, you can choose the self-managed shipping option
8. After the device is picked up, copy data to the Data Box at your site. After the data copy is complete, you can prepare to ship the Data Box. For more information, see [Prepare to ship](data-box-deploy-picked-up.md#prepare-to-ship).
- The **Prepare to ship** step needs to complete without any critical errors, otherwise you will need to run this step again after making the necessary fixes. After the prepare to ship completes successfully, you can view the authorization code for the drop off on the device local user interface.
+ The **Prepare to ship** step needs to complete without any critical errors. Otherwise, you'll need to run this step again after making the necessary fixes. After the **Prepare to ship** step completes successfully, you can view the authorization code for the drop-off on the device local user interface.
> [!NOTE] > Do not share the authorization code over email. This is only to be verified at the datacenter during drop off.
-9. If you have received an appointment for drop off, the order should have **Ready to receive at Azure datacenter** status in the Azure portal. Follow the instructions under **Schedule drop-off** to return the device.
+9. If you've received an appointment for drop-off, the order should have **Ready to receive at Azure datacenter** status in the Azure portal. Follow the instructions under **Schedule drop-off** to return the device.
![Instructions for device drop-off](media\data-box-portal-customer-managed-shipping\data-box-portal-received-complete-02b.png)
-10. After your ID and authorization code are verified and you have dropped off the device at the datacenter, the order status should be **Received**.
+10. After your ID and authorization code are verified, and you have dropped off the device at the datacenter, the order status should be **Received**.
![An order with Received status](media\data-box-portal-customer-managed-shipping\data-box-portal-received-complete-01.png)
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/ddos-protection-partner-onboarding https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/ddos-protection-partner-onboarding.md
@@ -89,3 +89,4 @@ The following steps are required for partners to configure integration with Azur
View existing partner integrations: - [Barracuda WAF-as-a-service](https://www.barracuda.com/waf-as-a-service)
+- [Azure Cloud WAF from Radware](https://www.radware.com/resources/microsoft-azure/)
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/concept-key-concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-key-concepts.md
@@ -15,9 +15,9 @@ This article describes key advantages of Azure Defender for IoT.
## Rapid non-invasive deployment and passive monitoring
-Defender for IoT sensors connect to a SPAN port or network TAP and immediately begin collecting ICS network traffic via passive (agentless) monitoring. Deep packet inspection (DPI) is used to dissect traffic from both serial and Ethernet control network equipment. Defender for IoT has zero impact on OT networks because it isn't placed in the data path and doesn't actively scan OT devices.
+Defender for IoT sensors connect to switch SPAN (mirror) ports and network TAPs and immediately begin collecting ICS network traffic via passive (agentless) monitoring. Deep packet inspection (DPI) is used to dissect traffic from both serial and Ethernet control network equipment. Defender for IoT has zero impact on OT networks because it isn't placed in the data path and doesn't actively scan OT devices.
-To deliver instant snapshots of detailed device information, Defender for IoT sensor supplements passive monitoring with an optional active component. This component uses safe, vendor-approved commands to query both Windows and controller devices for device details, as often or as infrequently as you want.
+To deliver instant snapshots of detailed Windows device information, the Defender for IoT sensor can be configured to supplement passive monitoring with an optional active component. This component uses safe, vendor-approved commands to query Windows devices for device details, as often or as infrequently as you want.
## Embedded knowledge of ICS protocols, devices, and applications
@@ -68,7 +68,7 @@ Defender for IoT provides a consolidated view of all your devices. It also provi
Defender for IoT enables the effective management of multiple deployments and a comprehensive unified view of the network. Defender for IoT optimizes alert handling and control of operational network security.
-The on-premises management console is a web-based administrative platform that lets you monitor and control the activities of global sensor installations. In addition to managing the data received from deployed sensors, the on-premises management console seamlessly integrates data from a variety of business resources: CMDBs, DNS, firewalls, Web APIs, and more.
+The on-premises management console is a web-based administrative platform that lets you monitor and control the activities of global sensor installations. In addition to managing the data received from deployed sensors, the on-premises management console seamlessly integrates data from various business resources: CMDBs, DNS, firewalls, Web APIs, and more.
:::image type="content" source="media/concept-air-gapped-networks/site-management-alert-screen.png" alt-text="On-premises management console display.":::
@@ -92,7 +92,7 @@ Integrations reduce complexity and eliminate IT and OT silos by integrating them
## Complete protocol support
-In addition to embedded protocol support, you can secure IoT and ICS devices running proprietary and custom protocols, or protocols that deviate from any standard. By using the Horizon Open Development Environment (ODE) SDK, developers can create dissector plug-ins that decode network traffic based on defined protocols. Services analyze traffic to provide complete monitoring, alerting, and reporting. Use Horizon to:
+In addition to embedded protocol support, you can secure IoT and ICS devices running proprietary and custom protocols, or protocols that deviate from any standard. By using the Horizon Open Development Environment (ODE) SDK, developers can create dissector plug-ins that decode network traffic based on defined protocols. Services analyze traffic to provide complete monitoring, alerting, and reporting. Use Horizon to:
- Expand visibility and control without the need to upgrade to new versions.
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-activate-and-set-up-your-sensor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-activate-and-set-up-your-sensor.md
@@ -76,7 +76,7 @@ To sign in and activate:
:::image type="content" source="media/how-to-activate-and-set-up-your-sensor/azure-defender-for-iot-sensor-log-in-screen.png" alt-text="Azure Defender for IoT sensor.":::
-1. Enter the credentials defined during the sensor installation. If you purchased a preconfigured sensor from Arrow, generate a password first. For more information on password recovery, see [Investigate password failure at initial sign-in](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#investigate-password-failure-at-initial-sign-in).
+1. Enter the credentials defined during the sensor installation, or select the **Password recovery** option. If you purchased a preconfigured sensor from Arrow, generate a password first. For more information on password recovery, see [Investigate password failure at initial sign-in](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#investigate-password-failure-at-initial-sign-in).
1. After you sign in, the **Activation** dialog box opens. Select **Upload** and go to the activation file that you downloaded during sensor onboarding.
@@ -131,7 +131,7 @@ After your sign in, the Azure Defender for IoT console opens.
After your first sign-in, the Azure Defender for IoT sensor starts to monitor your network automatically. Network devices will appear in the device map and device inventory sections. Azure Defender for IoT will begin to detect and alert you on all security and operational incidents that occur in your network. You can then create reports and queries based on the detected information.
-Initially this activity is carried out in the learning mode, which instructs your sensor to learn your network's usual activity. For example, the sensor learns devices discovered in your network, protocols detected in the network, and file transfers that occur between specific devices. This activity becomes your network's baseline activity.
+Initially this activity is carried out in the Learning Mode, which instructs your sensor to learn your network's usual activity. For example, the sensor learns devices discovered in your network, protocols detected in the network, and file transfers that occur between specific devices. This activity becomes your network's baseline activity.
### Review and update basic system settings
@@ -147,7 +147,7 @@ Define the sensor's system settings. For example:
- If DHCP is in use, define legitimate DHCP ranges.

-- Define integration with Active Directory and mail servers.
+- Define integration with Active Directory and mail server as appropriate.
### Disable learning mode
@@ -176,7 +176,7 @@ You access console tools from the side menu.
| --|--|--|
| Dashboard | :::image type="icon" source="media/concept-sensor-console-overview/dashboard-icon-azure.png" border="false"::: | View an intuitive snapshot of the state of the network's security. |
| Device map | :::image type="icon" source="media/concept-sensor-console-overview/asset-map-icon-azure.png" border="false"::: | View the network devices, device connections, and device properties in a map. Various zooms, highlight, and filter options are available to display your network. |
-| Device inventory | :::image type="icon" source="media/concept-sensor-console-overview/asset-inventory-icon-azure.png" border="false"::: | The device inventory displays an extensive range of device attributes that this sensor detects. Options are available to: <br /> - Filter the information according to the table fields, and see the filtered information displayed. <br /> - Export information to a CSV file. <br /> - Import Windows registry details.|
+| Device inventory | :::image type="icon" source="media/concept-sensor-console-overview/asset-inventory-icon-azure.png" border="false"::: | The device inventory displays a list of device attributes that this sensor detects. Options are available to: <br /> - Sort, or filter the information according to the table fields, and see the filtered information displayed. <br /> - Export information to a CSV file. <br /> - Import Windows registry details.|
| Alerts | :::image type="icon" source="media/concept-sensor-console-overview/alerts-icon-azure.png" border="false"::: | Display alerts when policy violations occur, deviations from the baseline behavior occur, or any type of suspicious activity in the network is detected. |
| Reports | :::image type="icon" source="media/concept-sensor-console-overview/reports-icon-azure.png" border="false"::: | View reports that are based on data-mining queries. |
@@ -191,7 +191,7 @@ You access console tools from the side menu.
| Window | Icon | Description |
||||
| Data mining | :::image type="icon" source="media/concept-sensor-console-overview/data-mining-icon-azure.png" border="false"::: | Generate comprehensive and granular information about your network's devices at various layers. |
-| Trends and statistics | :::image type="icon" source="media/concept-sensor-console-overview/trends-and-statistics-icon-azure.jpg" border="false"::: | View trends and statistics in an extensive range of widgets. |
+| Investigation | :::image type="icon" source="media/concept-sensor-console-overview/trends-and-statistics-icon-azure.jpg" border="false"::: | View trends and statistics in an extensive range of widgets. |
| Risk Assessment | :::image type="icon" source="media/concept-sensor-console-overview/vulnerabilities-icon-azure.png" border="false"::: | Display the **Vulnerabilities** window. |

**Admin**
@@ -199,7 +199,7 @@ You access console tools from the side menu.
| Window | Icon | Description |
||||
| Users | :::image type="icon" source="media/concept-sensor-console-overview/users-icon-azure.png" border="false"::: | Define users and roles with various access levels. |
-| Forwarding | :::image type="icon" source="medi) for details. |
+| Forwarding | :::image type="icon" source="medi) for details. |
| System settings | :::image type="icon" source="media/concept-sensor-console-overview/system-settings-icon-azure.png" border="false"::: | Configure the system settings. For example, define DHCP settings, provide mail server details, or create port aliases. |
| Import settings | :::image type="icon" source="medi) for details. |
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-create-data-mining-queries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-create-data-mining-queries.md
@@ -50,11 +50,11 @@ Data mining queries that you create are dynamically updated each time you open t
You can use queries to handle an extensive range of security needs for various security teams:

-- **SOC incident response**: Generate a report in real time to help deal with immediate incident response. For example, generate a report for a list of devices that might require patching.
+- **SOC incident response**: Generate a report in real time to help deal with immediate incident response. For example, Data Mining can generate a report for a list of devices that might require patching.
- **Forensics**: Generate a report based on historical data for investigative reports.

-- **IT Network Integrity**: Generate a report that helps improve overall network security. For example, generate a report that lists devices with weak authentication credentials.
+- **IT Network Integrity**: Generate a report that helps improve overall network security. For example, a report can be generated that lists devices with weak authentication credentials.
- **Visibility**: Generate a report that covers all query items to view all baseline parameters of your network.
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-create-trends-and-statistics-reports https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-create-trends-and-statistics-reports.md
@@ -21,19 +21,29 @@ You can create widget graphs and pie charts to gain insight into network trends
The dashboard consists of widgets that graphically describe the following types of information:

- Traffic by port
+- Top traffic by port
- Channel bandwidth
- Total bandwidth
- Active TCP connection
+- Top Bandwidth by VLAN
- Devices:
  - New devices
  - Busy devices
  - Devices by vendor
  - Devices by OS
+ - Number of devices per VLAN
  - Disconnected devices
-- Connectivity drop by hours
+- Connectivity drops by hours
- Alerts for incidents by type
- Database table access
- Protocol dissection widgets
+- DeltaV
+ - DeltaV ROC operations distribution
+ - DeltaV ROC events by name
+ - DeltaV events by time
+- AMS
+ - AMS traffic by server port
+ - AMS traffic by command
- Ethernet and IP address:
  - Ethernet and IP address traffic by CIP service
  - Ethernet and IP address traffic by CIP class
@@ -44,6 +54,15 @@ The dashboard consists of widgets that graphically describe the following types
- Siemens S7:
  - S7 traffic by control function
  - S7 traffic by subfunction
+- VLAN
+ - Number of devices per VLAN
+ - Top bandwidth by VLAN
+- 60870-5-104
+ - IEC-60870 Traffic by ASDU
+- BACnet
+ - BACnet Services
+- DNP3
+ - DNP3 traffic by function
- SRTP:
  - SRTP traffic by service code
  - SRTP errors by day
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-import-device-information https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-import-device-information.md
@@ -155,7 +155,7 @@ To import the IP address, OS, and patch level:
- **Operating System**: Select from the drop-down list.
- - **Last Update**: Use the YYYY-MM-DD format.
+ - **Date of Last Update**: Use the YYYY-MM-DD format (a sample row follows the image below).
:::image type="content" source="media/how-to-import-device-information/last-update-screen.png" alt-text="The content on the screen.":::
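+
+A minimal, hypothetical sample row for the import CSV follows; the column headers and values here are illustrative only, so rely on the downloaded file for the actual format:
+
+```
+IP Address,Operating System,Date of Last Update
+10.10.10.5,Windows 10,2021-01-15
+```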
@@ -167,7 +167,7 @@ To import the IP address, OS, and patch level:
To import the authorization status:
-1. Download and save the [authorized_devices.csv](https://cyberx-labs.zendesk.com/hc/en-us/articles/360008658272-How-To-Import-Data) file from the Defender for IoT help center. Verify that you saved the file as a CSV.
+1. Download and save the [authorized_devices - examples.csv](https://cyberx-labs.zendesk.com/hc/en-us/articles/360008658272-How-To-Import-Data) file from the Defender for IoT help center. Verify that you saved the file as a CSV.
2. Enter the information as:
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-investigate-sensor-detections-in-a-device-inventory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-investigate-sensor-detections-in-a-device-inventory.md
@@ -27,18 +27,18 @@ The following attributes appear in the device inventory table.
| Parameter | Description | |--|--|
-| Name | The name of the device as the sensor discovered it. |
-| Type | The type of device. |
+| Name | The name of the device as the sensor discovered it, or as entered by the user. |
+| Type | The type of device as determined by the sensor, or as entered by the user. |
| Vendor | The name of the device's vendor, as defined in the MAC address. |
-| Operating System | The OS of the device. |
-| Firmware | The device's firmware. |
-| IP Address | The IP address of the device. |
+| Operating System | The OS of the device, if detected. |
+| Firmware version | The device's firmware, if detected. |
+| IP Address | The IP address of the device, if defined. |
| VLAN | The VLAN of the device. For details about instructing the sensor to discover VLANs, see [Define VLAN names](how-to-manage-the-on-premises-management-console.md#define-vlan-names). |
| MAC Address | The MAC address of the device. |
| Protocols | The protocols that the device uses. |
| Unacknowledged Alerts | The number of unacknowledged alerts associated with this device. |
| Is Authorized | The authorization status defined by the user:<br />- **True**: The device has been authorized.<br />- **False**: The device has not been authorized. |
-| Is Known as Scanner | Defined as a scanning device by the user. |
+| Is Known as Scanner | Defined as a network scanning device by the user. |
| Is Programming device | Defined as an authorized programming device by the user. <br />- **True**: The device performs programming activities for PLCs, RTUs, and controllers, which are relevant to engineering stations. <br />- **False**: The device is not a programming device. |
| Groups | The groups that this device participates in. |
| Last Activity | The last activity that the device performed. |
@@ -106,7 +106,7 @@ When you switch to the map view, the filtered devices are highlighted and filter
## Learn Windows registry details
-In addition to learning OT devices, you can discover IT devices, including Microsoft Windows workstations and servers. These devices are also displayed in device inventory. After you learn devices, you can enrich the device inventory with detailed Windows information, such as:
+In addition to learning OT devices, you can discover Microsoft Windows workstations and servers. These devices are also displayed in the Device Inventory. After you learn devices, you can enrich the Device Inventory with detailed Windows information, such as:
- Windows version installed
@@ -212,7 +212,7 @@ To import:
## Export device inventory information
-You can export device inventory information to an Excel file. Imported information overwrites current information.
+You can export device inventory information to a CSV file.
To export a CSV file:
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-manage-individual-sensors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-manage-individual-sensors.md
@@ -362,9 +362,9 @@ To change the name:
1. Delete the sensor from the **Sensor Management** window.
-1. Register with the new name.
+1. Re-register with the new name.
-1. Download and new activation file.
+1. Download the new activation file.
1. Sign in to the sensor and upload the new activation file.
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-manage-the-alert-event https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-manage-the-alert-event.md
@@ -40,7 +40,7 @@ In certain situations, you might not want a sensor to learn a detected event, or
After you carry out mitigation or investigation, you can instruct the sensor to hide the alert by selecting **Acknowledge**. If the event is detected again, the alert will be retriggered.
-To hide the alert:
+To clear the alert:
- Select **Acknowledge**.
@@ -65,9 +65,9 @@ In these situations, learning is not available. When learning can't be carried o
### What traffic is muted?
-A muted scenario includes the network devices and traffic detected for an event. The alert title describes the traffic that's being muted.
+A muted scenario includes the network devices and traffic detected for an event. The alert title describes the traffic that's being muted.
-The device or devices being muted will be displayed as an image in the alert. If two devices are shown, the traffic between them will be muted.
+The device or devices being muted will be displayed as an image in the alert. If two devices are shown, the specific alerted traffic between them will be muted.
**Example 1**
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-track-sensor-activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-track-sensor-activity.md
@@ -54,15 +54,15 @@ To filter the timeline:
2. Set event filters, as follows:
- - **Include Address**: Display specific event devices.
+ - **Include Address**: Display events for specific devices.
- - **Exclude Address**: Hide specific event devices.
+ - **Exclude Address**: Hide events for specific devices.
- - **Include Event Types**: Display specific event types.
+ - **Include Event Types**: Display specific event types.
- - **Exclude Event Types**: Hide specific event types.
+ - **Exclude Event Types**: Hide specific event types.
- - **Device Group**: Select a device group, as it was defined in the device map. Only the events of this group are presented.
+ - **Device Group**: Select a device group, as it was defined in the device map. Only the events from this group are presented.
3. Select **Clear All** to clear all the selected filters.
@@ -82,7 +82,7 @@ To filter the timeline:
- Select **PCAP File** to download the PCAP file (if it exists) that contains a packet capture of the whole network at a specific time.
- The PCAP file contains technical information that can help engineers determine exactly where the event was and what's happening there. You can analyze the PCAP file with a network protocol analyzer such as Wireshark, a free application.
+ The PCAP file contains technical information that can help network engineers determine the exact parameters of the event. You can analyze the PCAP file with a network protocol analyzer such as Wireshark, an open-source application.
## See also
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-view-alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-view-alerts.md
@@ -79,7 +79,7 @@ When none of the options are selected, all the alerts are displayed.
:::image type="content" source="media/how-to-work-with-alerts-sensor/alerts-security.png" alt-text="Security on the alerts screen.":::
-## Alert window options
+## Alert page options
Alert messages provide the following actions:
@@ -97,6 +97,8 @@ Alert messages provide the following actions:
- Select :::image type="icon" source="media/how-to-work-with-alerts-sensor/export-to-csv.png" border="false"::: to export the alert list to a CSV file and select the export option. Choose **Alert Export** for the regular export-to-CSV option. Or choose **Extended Alert Export** for the possibility to add separate rows for additional information about an alert in the CSV file.
+## Alert pop-up window options
+
- Select the :::image type="icon" source="media/how-to-work-with-alerts-sensor/export-to-pdf.png" border="false"::: icon to download an alert report as a PDF file.
- Select the :::image type="icon" source="media/how-to-work-with-alerts-sensor/download-pcap.png" border="false"::: icon to download the PCAP file. The file is viewable with Wireshark, the free network protocol analyzer.
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-work with-the-sensor-console-dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-work with-the-sensor-console-dashboard.md
@@ -23,7 +23,7 @@ The dashboard allows you to quickly view the security status of your network. It
- Warnings -- The two gauges in the center of the page indicate the Packets per Second (PPS), and Unacknowledged Alerts (UA). **PPS** is the number of packets acknowledged by the system per second. **UA** is the number of alerts that have not been acknowledged yet.
+- The two indicators in the center of the page show the Packets per Second (PPS) and Unacknowledged Alerts (UA). **PPS** is the number of packets acknowledged by the system per second. **UA** is the number of alerts that have not been acknowledged yet.
- List of unacknowledged alerts with their description.
@@ -73,23 +73,23 @@ Select the down arrow **V** at the bottom of an alert box to display the alert e
:::image type="content" source="media/how-to-work with-the-sensor-console-dashboard/extended-alert-screen.png" alt-text="Alert entry and devices information"::: -- Select the device or **Show Devices** to display the physical mode map. The subjected devices are highlighted.
+- Select the device to display the physical mode map. The related devices are highlighted.
+
+- Click anywhere in the alert box to display additional details about the alert. A pop-up window similar to the one below is displayed.
- Select :::image type="content" source="media/how-to-work with-the-sensor-console-dashboard/excel-icon.png" alt-text="Excel"::: to export a CSV file about the alert. - Administrators and Security Analysts Only ΓÇö Select :::image type="content" source="media/how-to-work with-the-sensor-console-dashboard/approve-all-icon.png" alt-text="Acknowledge all"::: to **Acknowledge All** associated alerts. -- Select the alert entry to view the type and the description of the alert:- - Select :::image type="content" source="media/how-to-work with-the-sensor-console-dashboard/pdf-icon.png" alt-text="PDF":::to download an alert report as a PDF file. -- Select :::image type="content" source="media/how-to-work with-the-sensor-console-dashboard/pin-icon.png" alt-text="Pin":::to pin or unpin the alert.
+- Select :::image type="content" source="media/how-to-work with-the-sensor-console-dashboard/pin-icon.png" alt-text="Pin"::: to pin or unpin the alert. Pinning the alert adds it to the **Pinned Alerts** window on the **Alerts** screen.
-- Select :::image type="content" source="media/how-to-work with-the-sensor-console-dashboard/download-icon.png" alt-text="Download"::: to investigate the alert by downloading the PCAP file containing a network protocol analysis.
+- Select :::image type="content" source="media/how-to-work with-the-sensor-console-dashboard/download-icon.png" alt-text="Download"::: to investigate the alert by downloading the related PCAP file containing a network protocol analysis.
-- Select :::image type="content" source="media/how-to-work with-the-sensor-console-dashboard/cloud-download-icon.png" alt-text="Cloud"::: to download a filtered PCAP file that contains only the alert-relevant packets, thereby reducing output file size and allowing a more focused analysis. You can view it using [Wireshark](https://www.wireshark.org/).
+- Select :::image type="content" source="media/how-to-work with-the-sensor-console-dashboard/cloud-download-icon.png" alt-text="Cloud"::: to download a related filtered PCAP file that contains only the alert-relevant packets, thereby reducing output file size and allowing a more focused analysis. You can view it using [Wireshark](https://www.wireshark.org/).
-- Select :::image type="content" source="media/how-to-work with-the-sensor-console-dashboard/navigate-icon.png" alt-text="Navigation"::: to navigate to the event timeline at the time of the requested alert.
+- Select :::image type="content" source="media/how-to-work with-the-sensor-console-dashboard/navigate-icon.png" alt-text="Navigation"::: to navigate to the event timeline at the time of the requested alert. This allows you to assess other events that may be happening around the specific alert.
- Administrators and Security Analysts only - change the status of the alert from unacknowledged to acknowledged. Select **Learn** to approve detected activity.
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-work-with-device-notifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-work-with-device-notifications.md
@@ -55,19 +55,19 @@ The following table describes the notification event types you might receive, al
| Notification event types | Description | Responses |
|--|--|--|
-| New IP addresses | A new IP address is associated with the device. Five scenarios might be detected: <br /><br /> An additional IP address was associated with a device. This device is also associated with an existing MAC address.<br /><br /> A new IP address was detected for a device that's using an existing MAC address. Currently the device does not communicate by using an IP address.<br /> <br /> A new IP address was detected for a device that's using a NetBIOS name. <br /><br /> An IP address was detected as the management interface for a device associated with a MAC address. <br /><br /> A new IP address was detected for a device that's using a virtual IP address. | **Set Additional IP to Device** (merge devices) <br /> <br />**Replace Existing IP** <br /> <br /> **Dismiss**<br /> Remove the notification. |
+| New IP detected | A new IP address is associated with the device. Five scenarios might be detected: <br /><br /> An additional IP address was associated with a device. This device is also associated with an existing MAC address.<br /><br /> A new IP address was detected for a device that's using an existing MAC address. Currently the device does not communicate by using an IP address.<br /> <br /> A new IP address was detected for a device that's using a NetBIOS name. <br /><br /> An IP address was detected as the management interface for a device associated with a MAC address. <br /><br /> A new IP address was detected for a device that's using a virtual IP address. | **Set Additional IP to Device** (merge devices) <br /> <br />**Replace Existing IP** <br /> <br /> **Dismiss**<br /> Remove the notification. |
| Inactive devices | Traffic was not detected on a device for more than 60 days. | **Delete** <br /> If this device is not part of your network, remove it. <br /><br />**Dismiss** <br /> Remove the notification if the device is part of your network. If the device is inactive (for example, because it's mistakenly disconnected from the network), dismiss the notification and reconnect the device. |
-| New OT device | A subnet includes an OT device that's not defined in an ICS subnet. <br /><br /> Each subnet that contains at least one OT device can be defined as an ICS subnet. This helps differentiate between OT and IT devices on the map. | **Set as ICS Subnet** <br /> <br /> **Dismiss** <br />Remove the notification if the device is not part of the subnet. |
+| New OT devices | A subnet includes an OT device that's not defined in an ICS subnet. <br /><br /> Each subnet that contains at least one OT device can be defined as an ICS subnet. This helps differentiate between OT and IT devices on the map. | **Set as ICS Subnet** <br /> <br /> **Dismiss** <br />Remove the notification if the device is not part of the subnet. |
| No subnets configured | No subnets are currently configured in your network. <br /><br /> Configure subnets for better representation in the map and the ability to differentiate between OT and IT devices. | **Open Subnets Configuration** and configure subnets. <br /><br />**Dismiss** <br /> Remove the notification. |
| Operating system changes | One or more new operating systems have been associated with the device. | Select the name of the new OS that you want to associate with the device.<br /><br /> **Dismiss** <br /> Remove the notification. |
-| Subnets were detected | New subnets were discovered. | **Learn**<br />Automatically add the subnet.<br />**Open Subnet Configuration**<br />Add all missing subnet information.<br />**Dismiss**<br />Remove the notification. |
-| Device type change was detected | A new device type has been associated with the device. | **Set as {…}**<br />Associate the new type with the device.<br />**Dismiss**<br />Remove the notification. |
+| New subnets | New subnets were discovered. | **Learn**<br />Automatically add the subnet.<br />**Open Subnet Configuration**<br />Add all missing subnet information.<br />**Dismiss**<br />Remove the notification. |
+| Device type changes | A new device type has been associated with the device. | **Set as {…}**<br />Associate the new type with the device.<br />**Dismiss**<br />Remove the notification. |
## Respond to many notifications simultaneously

You might need to handle several notifications simultaneously. For example:

-- If IT did an OS upgrade to a large set of network servers, you can instruct the sensor to learn the new server versions for all upgraded servers.
+- If IT did an OS upgrade to a large set of network servers, you can instruct the sensor to learn the new server versions for all upgraded servers.
- If a group of devices in a certain line was phased out and isn't active anymore, you can instruct the sensor to remove these devices from the console.
@@ -77,7 +77,7 @@ To display notifications and handle notifications:
1. Use the **filter by type, date range** option or the **Select All** option. Deselect notifications as required.
-2. Instruct the sensor to apply newly detected information to selected devices by selecting **LEARN**. Or, instruct the sensor to ignore newly detected information by selecting **DISMISS**. The number of notifications that you can simultaneously learn and dismiss, along with the number of notifications you must handle individually, is shown.
+2. Instruct the sensor to apply newly detected information to selected devices by selecting **LEARN**. Or, instruct the sensor to ignore newly detected information by selecting **DISMISS**. The number of notifications that you can simultaneously learn and dismiss, along with the number of notifications you must handle individually, is shown.
**New IPs** and **No Subnets** configured events can't be handled simultaneously. They require manual confirmation.
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-work-with-the-sensor-device-map https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-work-with-the-sensor-device-map.md
@@ -58,7 +58,7 @@ The figure below shows a collapsed IT subnet with 27 IT network elements.
To enable the IT networks collapsing capability:

-- In the System Setting window, ensure that the IT Networks capability is enabled.
+- In the **System Settings** window, ensure that the Toggle IT Networks Grouping capability is enabled.
:::image type="content" source="media/how-to-work-with-maps/shrunk-it-subnet-v2.png" alt-text="System Setting window":::
@@ -71,7 +71,7 @@ To expand an IT subnet:
:::image type="content" source="media/how-to-work-with-maps/subnet-list.png" alt-text="Subnets Configuration":::
-2. In the Edit Subnets Configuration window, clear the ICS subnet for each subnet that you want to define as an IT subnet. The IT subnets appear collapsed in the device map with the notifications for ICS devices, such as a controller or PLC, in IT networks.
+2. In the **Edit Subnets Configuration** window, clear the **ICS Subnet** checkbox for each subnet that you want to define as an IT subnet. The IT subnets appear collapsed in the device map with the notifications for ICS devices, such as a controller or PLC, in IT networks.
:::image type="content" source="media/how-to-work-with-maps/edit-config.png" alt-text="Edit Subnets Configuration":::
@@ -101,13 +101,13 @@ The collapse icon is updated with the updated number of expanded IT subnets.
## View or highlight device groups
-You can customize the map display based on device Groups. For example, groups of devices associated with a specific VLAN or subnet. Predefined groups are available and custom groups can be created.
+You can customize the map display based on device groups. For example, groups of devices associated with a specific OT protocol, VLAN, or subnet. Predefined groups are available, and custom groups can be created.
View groups by:

 - **Highlighting:** Highlight the devices that belong to a specific group in blue.
- - **Filtering:** Only display devices on the map only that belong to a specific group.
+ - **Filtering:** Display only devices that belong to a specific group.
:::image type="content" source="media/how-to-work-with-maps/port-standard.png" alt-text="Standard view of your port":::
@@ -115,17 +115,18 @@ The following predefined groups are available:
| Group name | Description |
|--|--|
-| **Known applications or non-standard ports (default)** | Devices that use reserved ports, such as TCP. Devices that use non-standard ports or ports that have not been assigned an alias. |
-| **OT protocols (default)** | Devices that handle the OT traffic. |
-| **Authorization (default)** | Devices that were discovered in the network during the learning process or were officially added to the network |
+| **Known applications** | Devices that use reserved ports, such as TCP. |
+| **Non-standard ports (default)** | Devices that use non-standard ports or ports that have not been assigned an alias. |
+| **OT protocols (default)** | Devices that handle known OT traffic. |
+| **Authorization (default)** | Devices that were discovered in the network during the learning process or were officially authorized on the network. |
| **Device inventory filters** | Devices grouped according to the filters saved in the Device Inventory table. |
| **Polling intervals** | Devices grouped by polling intervals. The polling intervals are generated automatically according to cyclic channels, or periods. For example, 15.0 seconds, 3.0 seconds, 1.5 seconds, or any interval. Reviewing this information helps you learn if systems are polling too quickly or slowly. |
-| **Programming** | Engineering stations and programmed controllers |
+| **Programming** | Engineering stations and programming machines. |
| **Subnets** | Devices that belong to a specific subnet. |
| **VLAN** | Devices associated with a specific VLAN ID. |
-| **Connection between subnets** | Devices associated with cross subnet connection. |
+| **Cross subnet connections** | Devices that communicate from one subnet to another subnet. |
| **Pinned alerts** | Devices for which the user has pinned an alert. |
-| **Attack vector simulations** | Vulnerable devices detected in attack vector reports. In order to view these devices on the map, select the **Display on Device Map** checkbox when generating the Attack Vector. :::image type="content" source="media/how-to-work-with-maps/add-attack-v2.png" alt-text="Add Attack Vector Simulations"::: |
+| **Attack vector simulations** | Vulnerable devices detected in attack vector reports. In order to view these devices on the map, select the **Display on Device Map** checkbox when generating the Attack Vector. :::image type="content" source="media/how-to-work-with-maps/add-attack-v2.png" alt-text="Add Attack Vector Simulations":::. |
| **Last seen** | Devices grouped by the time frame they were last seen, for example: One hour, six hours, one day, seven days. |
| **Not In Active Directory** | All non-PLC devices that are not communicating with the Active Directory. |
@@ -137,7 +138,7 @@ To highlight or filter devices:
3. From the Groups pane, select the group whose devices you want to highlight or filter.
-4. Select **Highlight** or **Filter**.
+4. Select **Highlight** or **Filter**. Toggle the same selection to remove the highlight or filter.
## Define custom groups
@@ -150,20 +151,20 @@ To create a group:
1. Select **Devices** on the side menu. The Device Map is displayed.
-2. Select :::image type="content" source="media/how-to-work-with-maps/menu-icon.png" alt-text="Group Setting"::: to display the Groups settings.
+1. Select :::image type="content" source="media/how-to-work-with-maps/menu-icon.png" alt-text="Group Setting"::: to display the Groups settings.
-3. Select :::image type="content" source="media/how-to-work-with-maps/create-group-v2.png" alt-text="groups"::: to create a new custom group.
+1. Select :::image type="content" source="media/how-to-work-with-maps/create-group-v2.png" alt-text="groups"::: to create a new custom group.
:::image type="content" source="media/how-to-work-with-maps/custom-group-v2.png" alt-text="Create a custom group screen":::
-4. Add the name of the group, use up to 30 characters.
+1. Add the name of the group; use up to 30 characters.
-5. Select the relevant devices, as follows:
+1. Select the relevant devices, as follows:
- Add the devices from this menu by selecting them from the list (select the arrow button),<br /> Or, <br /> - Add the devices from this menu by copying them from a selected group (select the arrow button)
-6. Select **Add group**.
+1. Select **Add group** to add existing groups to custom groups.
### Add devices to a custom group
@@ -171,13 +172,13 @@ You can add devices to a custom group or create a new custom group and the devic
1. Right-click one or more devices on the map.
-2. Select **Add to group**.
+1. Select **Add to group**.
-3. Enter a group name in the group field and select +. The new group appears. If the group already exists, it will be added to the existing custom group.
+1. Enter a group name in the group field and select +. The new group appears. If the group already exists, the device will be added to the existing custom group.
:::image type="content" source="media/how-to-work-with-maps/groups-section-v2.png" alt-text="Group name":::
-4. Add devices to a group by repeating steps 1-3.
+1. Add devices to a group by repeating steps 1-3.
## Map zoom views
@@ -314,12 +315,12 @@ The following information can be updated manually. Information manually entered
| Basic Information | The basic information needed. |
| Name | The device name. <br /> By default, the sensor discovers the device name as it is defined in the network. For example, a name defined in the DNS server. <br /> If no such names were defined, the device IP address appears in this field. <br /> You can change a device name manually. Give your devices meaningful names that reflect their functionality. |
| Type | The device type detected by the sensor. <br /> For more information, see [View device types](#view-device-types). |
-| Vendor | The device vendor. |
-| Operating System | The device OS. |
+| Vendor | The device vendor. This is determined by the leading characters of the device MAC address. This field is read-only. |
+| Operating System | The device OS detected by the sensor. |
| Purdue Layer | The Purdue layer identified by the sensor for this device, including: <br /> - Automatic <br /> - Process Control <br /> - Supervisory <br /> - Enterprise |
| Description | A free text field. <br /> Add more information about the device. |
| Attributes | Any additional information that was discovered about the device during the learning period and does not belong to other categories, appears in the attributes section. <br /> The information is read-only. |
-| Settings | You can manually change device settings to prevent false positives: <br /> - **Authorized Device**: During the learning period, all the devices discovered in the network are identified as authorized devices. When a device is discovered after the learning period, it appears as an unauthorized device by default. You can change this definition manually. <br /> - **Known as Scanner**: Enable this option if you know that this device is known as scanner and there is no need to alert you about it. <br /> - **Programming Device**: Enable this option if you know that this device is known as a programming device and there is no need to alert you about it. |
+| Settings | You can manually change device settings to prevent false positives: <br /> - **Authorized Device**: During the learning period, all the devices discovered in the network are identified as authorized devices. When a device is discovered after the learning period, it appears as an unauthorized device by default. You can change this definition manually. <br /> - **Known as Scanner**: Enable this option if you know that this device is a scanner and there is no need to alert you about it. <br /> - **Programming Device**: Enable this option if you know that this device is a programming device and is used to make programming changes. Identifying it as a programming device will prevent alerts for programming changes originating from this asset. |
| Custom Groups | The custom groups in the device map in which this device participates. |
| State | The security and the authorization status of the device: <br /> - The status is `Secured` when there are no alerts <br /> - When there are alerts about the device, the number of alerts is displayed <br /> - The status `Unauthorized` is displayed for devices that were added to the network after the learning period. You can manually define the device as `Authorized Device` in the settings <br /> - In case the address of this device is defined as a dynamic address, `DHCP` is added to the status. |
@@ -364,7 +365,7 @@ To view the device information:
2. Right-click a device and select **View Properties**. The Device Properties window is displayed.
-3. Select on the required alert at the bottom of this window to view detailed information about alerts for this device.
+3. Select the required alert to view detailed information about alerts for this device.
### Backplane properties
@@ -424,7 +425,7 @@ Enhance forensics by displaying programming events carried out on your network d
You can display a programmed device and scroll through various programming changes carried out on it by other devices.
-View code that was added, changed, removed, or unchanged by the programming device. Search for programming changes based on file types, dates, or times of interest.
+View code that was added, changed, removed, or reloaded by the programming device. Search for programming changes based on file types, dates, or times of interest.
### When to review programming activity
@@ -438,7 +439,7 @@ You may need to review programming activity:
:::image type="content" source="media/how-to-work-with-maps/differences.png" alt-text="Programing Change Log":::
-Additional options let you:
+Other options let you:
- Mark events of interest with a star.
@@ -471,7 +472,7 @@ Alerts are triggered when unauthorized programming devices carry out programming
:::image type="content" source="media/how-to-work-with-maps/unauthorized.png" alt-text="Unauthorized programming alerts"::: > [!NOTE]
-> You can also view basic programming information in the Device Properties window and Device Inventory. See [Device Programming Information: Additional Locations](#device-programming-information-additional-locations) for details.
+> You can also view basic programming information in the Device Properties window and Device Inventory.
### Working in the programming timeline window
@@ -534,7 +535,7 @@ To compare:
5. The file selected from the Recent Events/Files pane always appears on the right.
-### Device programming information: additional locations
+### Device programming information: other locations
In addition to reviewing details in the Programming Timeline, you can access programming information in the Device Properties window and the Device Inventory.
@@ -551,7 +552,7 @@ The sensor does not update or impact devices directly on the network. Changes ma
You may want to delete a device if the information learned is not relevant. For example,
- - A partner contractor at an engineering workstation connects to perform configuration updates. After the task is completed, the device should no longer be monitored.
+ - A partner contractor at an engineering workstation connects temporarily to perform configuration updates. After the task is completed, the device is removed.
- Due to changes in the network, some devices are no longer connected.
@@ -561,7 +562,7 @@ You may receive an alert indicating that the device is unresponsive if another d
The device will be removed from the Device Map, Device Inventory, and Data Mining reports. Other information, such as information stored in Widgets, will be maintained.
-The device must be active for at least 10 minutes to delete it.
+The device must be inactive for at least 10 minutes to delete it.
To delete a device from the device map:
@@ -571,15 +572,17 @@ To delete a device from the device map:
### Merge devices
-Under certain circumstances, you may need to merge devices. This may be required if the sensor discovered separate network entities that are one unique device. For example,
+Under certain circumstances, you may need to merge devices. This may be required if the sensor discovered separate network entities that are associated with one unique device. For example,
- - A PLC with four network cards
+ - A PLC with four network cards.
- - A Laptop with WIFI and physical card
+ - A laptop with Wi-Fi and a physical network card.
+
+ - A workstation with two or more network cards.
When merging, you instruct the sensor to combine the device properties of two devices into one. When you do this, the Device Properties window and sensor reports will be updated with the new device property details.
-For example, if you merge two devices with an IP address, both IP addresses will appear as separate interfaces in the Device Properties window. You can only merge authorized devices.
+For example, if you merge two devices, each with an IP address, both IP addresses will appear as separate interfaces in the Device Properties window. You can only merge authorized devices.
:::image type="content" source="media/how-to-work-with-maps/device-properties-v2.png" alt-text="Device Properties window":::
@@ -591,7 +594,7 @@ You cannot undo a device merge. If you mistakenly merged two devices, delete the
To merge devices:
-1. Select two devices and right-click one of them.
+1. Select two devices (shift-click), and then right-click one of them.
2. Select **Merge** to merge the devices. It can take up to 2 minutes to complete the merge.
@@ -617,7 +620,7 @@ If you move a device on the map or manually change the device properties, the `N
#### Unauthorized devices - attack vectors and risk assessment reports
-Unauthorized devices are calculated included in Risk Assessment reports and Attack Vectors reports.
+Unauthorized devices are included in Risk Assessment reports and Attack Vectors reports.
- **Attack Vector Reports:** Devices marked as unauthorized are resolved in the Attack Vector as suspected rogue devices that might be a threat to the network.
dns https://docs.microsoft.com/en-us/azure/dns/dns-reverse-dns-hosting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dns/dns-reverse-dns-hosting.md
@@ -40,7 +40,7 @@ The name of an IPv4 reverse lookup zone is based on the IP range that it represe
>
> For example, for the IP range 192.0.2.128/26, you must use `128-26.2.0.192.in-addr.arpa` as the zone name instead of `128/26.2.0.192.in-addr.arpa`.
>
-> Although the DNS standards support both methods, Azure DNS doesn't support DNS zone names that contain for forward slash (`/`) character.
+> Although the DNS standards support both methods, Azure DNS doesn't support DNS zone names that contain the forward slash (`/`) character.
The following example shows how to create a Class C reverse DNS zone named `2.0.192.in-addr.arpa` in Azure DNS via the Azure portal:
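If you prefer scripting the same setup, a minimal Azure CLI sketch might look like the following; the resource group and PTR record values are illustrative, not part of the article:

```azurecli
# Create the Class C reverse lookup zone (note: no '/' in the zone name)
az network dns zone create --resource-group mydnsresourcegroup --name 2.0.192.in-addr.arpa

# Add a PTR record for host 192.0.2.15 pointing at a hypothetical hostname
az network dns record-set ptr add-record --resource-group mydnsresourcegroup \
  --zone-name 2.0.192.in-addr.arpa --record-set-name 15 --ptrdname www.contoso.com
```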
expressroute https://docs.microsoft.com/en-us/azure/expressroute/expressroute-howto-set-global-reach-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-set-global-reach-portal.md
@@ -65,7 +65,7 @@ If the two circuits aren't in the same Azure subscription, you'll need authoriza
:::image type="content" source="./media/expressroute-howto-set-global-reach-portal/create-authorization-expressroute-circuit.png" alt-text="Generate authorization key":::
- Make a note of the private peering ID of circuit 2 and the authorization key.
+ Make a note of the circuit resource ID of circuit 2 and the authorization key.
1. Select the **Azure private** peering configuration.
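For reference, the circuit resource ID and authorization key noted above can also be used to create the Global Reach connection from the Azure CLI; this is a sketch, and the names and /29 prefix below are placeholders:

```azurecli
az network express-route peering connection create \
  --resource-group MyResourceGroup \
  --circuit-name Circuit1 \
  --peering-name AzurePrivatePeering \
  --name Circuit1ToCircuit2 \
  --peer-circuit "<circuit 2 resource ID>" \
  --address-prefix "192.168.8.0/29" \
  --authorization-key "<authorization key>"
```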
frontdoor https://docs.microsoft.com/en-us/azure/frontdoor/front-door-caching https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-caching.md
@@ -19,13 +19,13 @@ The following document specifies behaviors for Front Door with routing rules tha
## Delivery of large files

Azure Front Door delivers large files without a cap on file size. Front Door uses a technique called object chunking. When a large file is requested, Front Door retrieves smaller pieces of the file from the backend. After receiving a full or byte-range file request, the Front Door environment requests the file from the backend in chunks of 8 MB.
-</br>After the chunk arrives at the Front Door environment, it's cached and immediately served to the user. Front Door then pre-fetches the next chunk in parallel. This pre-fetch ensures that the content stays one chunk ahead of the user, which reduces latency. This process continues until the entire file gets downloaded (if requested) or the client closes the connection.
+After the chunk arrives at the Front Door environment, it's cached and immediately served to the user. Front Door then pre-fetches the next chunk in parallel. This pre-fetch ensures that the content stays one chunk ahead of the user, which reduces latency. This process continues until the entire file gets downloaded (if requested) or the client closes the connection.
-</br>For more information on the byte-range request, read [RFC 7233](https://web.archive.org/web/20171009165003/http://www.rfc-base.org/rfc-7233.html).
+For more information on the byte-range request, read [RFC 7233](https://web.archive.org/web/20171009165003/http://www.rfc-base.org/rfc-7233.html).
Front Door caches any chunks as they're received so the entire file doesn't need to be cached on the Front Door cache. Ensuing requests for the file or byte ranges are served from the cache. If the chunks aren't all cached, pre-fetching is used to request chunks from the backend. This optimization relies on the backend's ability to support byte-range requests. If the backend doesn't support byte-range requests, this optimization isn't effective.

## File compression
-Front Door can dynamically compress content on the edge, resulting in a smaller and faster response time to your clients. All files are eligible for compression. However, a file must be of a MIME type to be eligible for compression. Currently, Front Door doesn't allow this list to be changed. The current list is:</br>
+Front Door can dynamically compress content on the edge, resulting in a smaller and faster response time to your clients. For a file to be eligible for compression, caching must be enabled and the file must be of a MIME type on the compression list. Currently, Front Door doesn't allow this list to be changed. The current list is:
- "application/eot" - "application/font" - "application/font-sfnt"
governance https://docs.microsoft.com/en-us/azure/governance/policy/how-to/guest-configuration-create-linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/guest-configuration-create-linux.md
@@ -252,7 +252,7 @@ Parameters of the `Publish-GuestConfigurationPackage` cmdlet:
The example below publishes the package to a storage container named 'guestconfiguration'.

```azurepowershell-interactive
-Publish-GuestConfigurationPackage -Path ./AuditBitlocker.zip -ResourceGroupName myResourceGroupName -StorageAccountName myStorageAccountName
+Publish-GuestConfigurationPackage -Path ./AuditFilePathExists/AuditFilePathExists.zip -ResourceGroupName myResourceGroupName -StorageAccountName myStorageAccountName
```

Once a Guest Configuration custom policy package has been created and uploaded, create the Guest
@@ -390,8 +390,9 @@ $PolicyParameterInfo = @(
# The hashtable also supports a property named 'AllowedValues' with an array of strings to limit input to a list
-New-GuestConfigurationPolicy
- -ContentUri 'https://storageaccountname.blob.core.windows.net/packages/AuditFilePathExists.zip?st=2019-07-01T00%3A00%3A00Z&se=2024-07-01T00%3A00%3A00Z&sp=rl&sv=2018-03-28&sr=b&sig=JdUf4nOCo8fvuflOoX%2FnGo4sXqVfP5BYXHzTl3%2BovJo%3D' `
+$uri = 'https://storageaccountname.blob.core.windows.net/packages/AuditFilePathExists.zip?st=2019-07-01T00%3A00%3A00Z&se=2024-07-01T00%3A00%3A00Z&sp=rl&sv=2018-03-28&sr=b&sig=JdUf4nOCo8fvuflOoX%2FnGo4sXqVfP5BYXHzTl3%2BovJo%3D'
+
+New-GuestConfigurationPolicy -ContentUri $uri `
  -DisplayName 'Audit Linux file path.' `
  -Description 'Audit that a file path exists on a Linux machine.' `
  -Path './policies' `
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-high-availability-case-study https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-high-availability-case-study.md
@@ -66,7 +66,7 @@ The following image shows Contoso Retail's high availability disaster recovery a
**Hive and Spark** use [Active Primary - On-Demand Secondary](hdinsight-business-continuity-architecture.md#apache-spark) replication models during normal times. The Hive replication process runs periodically and accompanies the Hive Azure SQL metastore and Hive storage account replication. The Spark storage account is periodically replicated using ADF DistCP. The transient nature of these clusters helps optimize costs. Replications are scheduled every 4 hours to arrive at an RPO that is well within the five-hour requirement.
-**HBase** replication uses the [Leader - Follower](hdinsight-business-continuity-architecture.md#apache-hbase) model during normal times to ensure that data is always served regardless of the region and the RPO is zero.
+**HBase** replication uses the [Leader - Follower](hdinsight-business-continuity-architecture.md#apache-hbase) model during normal times to ensure that data is always served regardless of the region and the RPO is very low.
If there is a regional failure in the primary region, the webpage and backend content are served from the secondary region for 5 hours with some degree of staleness. If the Azure service health dashboard does not indicate a recovery ETA in the five-hour window, Contoso Retail will create the Hive and Spark transformation layer in the secondary region, and then point all upstream data sources to the secondary region. Making the secondary region writable would cause a failback process that involves replication back to the primary.
@@ -80,4 +80,4 @@ To learn more about the items discussed in this article, see:
* [Azure HDInsight business continuity](./hdinsight-business-continuity.md)
* [Azure HDInsight business continuity architectures](./hdinsight-business-continuity-architecture.md)
-* [What is Apache Hive and HiveQL on Azure HDInsight?](./hadoop/hdinsight-use-hive.md)
+* [What is Apache Hive and HiveQL on Azure HDInsight?](./hadoop/hdinsight-use-hive.md)
iot-edge https://docs.microsoft.com/en-us/azure/iot-edge/how-to-store-data-blob https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-store-data-blob.md
@@ -179,6 +179,7 @@ The Azure Blob Storage documentation includes quickstart sample code in several
The following quickstart samples use languages that are also supported by IoT Edge, so you could deploy them as IoT Edge modules alongside the blob storage module:

* [.NET](../storage/blobs/storage-quickstart-blobs-dotnet.md)
+ * The Azure Blob Storage on IoT Edge module versions 1.4.0 and earlier are compatible with the WindowsAzure.Storage 9.3.3 SDK, and v1.4.1 also supports the Azure.Storage.Blobs 12.8.0 SDK.
* [Python](../storage/blobs/storage-quickstart-blobs-python.md)
  * Versions before V2.1 of the Python SDK have a known issue where the module does not return blob creation time. Because of that issue, some methods, like list blobs, do not work. As a workaround, explicitly set the API version on the blob client to '2017-04-17'. Example: `block_blob_service._X_MS_VERSION = '2017-04-17'`
  * [Append Blob Sample](https://github.com/Azure/azure-storage-python/blob/master/samples/blob/append_blob_usage.py)
@@ -287,7 +288,7 @@ This Azure Blob Storage on IoT Edge module now provides integration with Event G
## Release Notes
-Here are the [release notes in docker hub](https://hub.docker.com/_/microsoft-azure-blob-storage) for this module
+Here are the [release notes in Docker Hub](https://hub.docker.com/_/microsoft-azure-blob-storage) for this module. You might be able to find more information related to bug fixes and remediation in the release notes of a specific version.
## Suggestions
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/tutorial-connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-connectivity.md
@@ -56,15 +56,15 @@ A device must authenticate with your hub before it can exchange any data with th
Sign in to the portal and navigate to your IoT hub. Then navigate to the **IoT Devices** tool:
-![IoT Devices tool](media/tutorial-connectivity/iot-devices-tool.png)
-To register a new device, click **+ Add**, set **Device ID** to **MyTestDevice**, and click **Save**:
+To register a new device, click **+ New**, set **Device ID** to **MyTestDevice**, and click **Save**.
-![Add new device](media/tutorial-connectivity/add-device.png)
-To retrieve the connection string for **MyTestDevice**, click on it in the list of devices and then copy the **Connection string-primary key** value. The connection string includes the *shared access key* for the device.
+To retrieve the connection string for **MyTestDevice**, click on it in the list of devices and then copy the **Primary Connection String** value. The connection string includes the *shared access key* for the device.
-![Retrieve device connection string](media/tutorial-connectivity/copy-connection-string.png)
To simulate **MyTestDevice** sending telemetry to your IoT hub, run the Node.js simulated device application you downloaded previously.
@@ -208,7 +208,7 @@ The simulated device prints a message to the console when it receives a direct m
![Simulated device receives direct method call](media/tutorial-connectivity/receive-method-call.png)
-When the simulated device successfully receives the direct method call, it sends an acknowledgement back to the hub:
+When the simulated device successfully receives the direct method call, it sends an acknowledgment back to the hub:
![Receive direct method acknowledgment](media/tutorial-connectivity/method-acknowledgement.png)
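As a sketch, a direct method call like the one the simulated device acknowledges can be invoked from the Azure CLI; the method name and payload here are illustrative, not taken from the tutorial:

```azurecli
# Requires the azure-iot extension: az extension add --name azure-iot
az iot hub invoke-device-method --hub-name MyIoTHub --device-id MyTestDevice \
  --method-name SetTelemetryInterval --method-payload '10'
```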
key-vault https://docs.microsoft.com/en-us/azure/key-vault/secrets/tutorial-rotation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/secrets/tutorial-rotation.md
@@ -42,7 +42,7 @@ Below deployment link can be used, if you don't have existing Key Vault and SQL
[![Image showing a button labeled "Deploy to Azure".](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FKeyVault-Rotation-SQLPassword-Csharp%2Fmain%2FARM-Templates%2FInitial-Setup%2Fazuredeploy.json)
-1. Under **Resource group**, select **Create new**. Name the group **akvrotation**.
+1. Under **Resource group**, select **Create new**. Give the group a name; this tutorial uses **akvrotation**.
1. Under **Sql Admin Login**, type the SQL administrator login name.
1. Select **Review + create**.
1. Select **Create**.
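If you'd rather deploy the template from the command line than use the **Deploy to Azure** button, a minimal Azure CLI sketch could look like this; the `sqlAdminLogin` parameter name is an assumption about the template:

```azurecli
az group create --name akvrotation --location eastus

az deployment group create --resource-group akvrotation \
  --template-uri "https://raw.githubusercontent.com/Azure-Samples/KeyVault-Rotation-SQLPassword-Csharp/main/ARM-Templates/Initial-Setup/azuredeploy.json" \
  --parameters sqlAdminLogin=<sql-admin-login>
```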
@@ -250,4 +250,4 @@ When the application opens in the browser, you will see the **Generated Secret V
- Tutorial: [Rotation for resources with two sets of credentials](tutorial-rotation-dual.md)
- Overview: [Monitoring Key Vault with Azure Event Grid](../general/event-grid-overview.md)
- How to: [Receive email when a key vault secret changes](../general/event-grid-logicapps.md)
-- [Azure Event Grid event schema for Azure Key Vault](../../event-grid/event-schema-key-vault.md)
+- [Azure Event Grid event schema for Azure Key Vault](../../event-grid/event-schema-key-vault.md)
lighthouse https://docs.microsoft.com/en-us/azure/lighthouse/how-to/monitor-at-scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/how-to/monitor-at-scale.md
@@ -1,7 +1,7 @@
Title: Monitor delegated resources at scale description: Learn how to effectively use Azure Monitor Logs in a scalable way across the customer tenants you're managing. Previously updated : 01/07/2021 Last updated : 02/01/2021
@@ -44,7 +44,7 @@ After you've deployed your policies, data will be logged in the Log Analytics wo
You can view [alerts](../../azure-monitor/platform/alerts-overview.md) for the delegated subscriptions in customer tenants that you manage.
-TO refresh alerts automatically across multiple customers, use an [Azure Resource Graph](../../governance/resource-graph/overview.md) query to filter for alerts. You can pin the query to your dashboard and select all of the appropriate customers and subscriptions.
+To refresh alerts automatically across multiple customers, use an [Azure Resource Graph](../../governance/resource-graph/overview.md) query to filter for alerts. You can pin the query to your dashboard and select all of the appropriate customers and subscriptions.
The following example query will display severity 0 and 1 alerts, refreshing every 60 minutes.
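As a sketch of such a query run from the Azure CLI, assuming alerts are surfaced through the `alertsmanagementresources` table (the exact schema may differ):

```azurecli
# Requires the resource-graph extension: az extension add --name resource-graph
az graph query -q "alertsmanagementresources
| where type =~ 'microsoft.alertsmanagement/alerts'
| where properties.essentials.severity in~ ('Sev0', 'Sev1')
| project name, subscriptionId, severity = properties.essentials.severity"
```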
logic-apps https://docs.microsoft.com/en-us/azure/logic-apps/workflow-definition-language-functions-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/workflow-definition-language-functions-reference.md
@@ -601,10 +601,10 @@ addDays('<timestamp>', <days>, '<format>'?)
This example adds 10 days to the specified timestamp:

```
-addDays('2018-03-15T13:00:00Z', 10)
+addDays('2018-03-15T00:00:00Z', 10)
```
-And returns this result: `"2018-03-25T00:00:0000000Z"`
+And returns this result: `"2018-03-25T00:00:00.0000000Z"`
*Example 2*
@@ -614,7 +614,7 @@ This example subtracts five days from the specified timestamp:
addDays('2018-03-15T00:00:00Z', -5)
```
-And returns this result: `"2018-03-10T00:00:0000000Z"`
+And returns this result: `"2018-03-10T00:00:00.0000000Z"`
<a name="addHours"></a>
@@ -646,7 +646,7 @@ This example adds 10 hours to the specified timestamp:
addHours('2018-03-15T00:00:00Z', 10)
```
-And returns this result: `"2018-03-15T10:00:0000000Z"`
+And returns this result: `"2018-03-15T10:00:00.0000000Z"`
*Example 2*
@@ -656,7 +656,7 @@ This example subtracts five hours from the specified timestamp:
addHours('2018-03-15T15:00:00Z', -5)
```
-And returns this result: `"2018-03-15T10:00:0000000Z"`
+And returns this result: `"2018-03-15T10:00:00.0000000Z"`
<a name="addMinutes"></a>
@@ -2339,7 +2339,7 @@ guid('<format>')
| Parameter | Required | Type | Description |
| --------- | -------- | ---- | ----------- |
-| <*format*> | No | String | A single [format specifier](/dotnet/api/system.guid.tostring?view=netcore-3.1#system_guid_tostring_system_string_) for the returned GUID. By default, the format is "D", but you can use "N", "D", "B", "P", or "X". |
+| <*format*> | No | String | A single [format specifier](/dotnet/api/system.guid.tostring#system_guid_tostring_system_string_) for the returned GUID. By default, the format is "D", but you can use "N", "D", "B", "P", or "X". |
|||||

| Return value | Type | Description |
@@ -4124,7 +4124,7 @@ This example subtracts one day from this timestamp:
subtractFromTime('2018-01-02T00:00:00Z', 1, 'Day')
```
-And returns this result: `"2018-01-01T00:00:00:0000000Z"`
+And returns this result: `"2018-01-01T00:00:00.0000000Z"`
*Example 2*
@@ -4177,7 +4177,7 @@ And return these results:
### ticks
-Returns the number of ticks, which are 100-nanosecond intervals, since January 1, 0001 12:00:00 midnight (or DateTime.Ticks in C#) up to the specified timestamp. For more information, see this topic: [DateTime.Ticks Property (System)](/dotnet/api/system.datetime.ticks?view=netframework-4.7.2#remarks).
+Returns the number of ticks, which are 100-nanosecond intervals, since January 1, 0001 12:00:00 midnight (or DateTime.Ticks in C#) up to the specified timestamp. For more information, see this topic: [DateTime.Ticks Property (System)](/dotnet/api/system.datetime.ticks).
```
ticks('<timestamp>')
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-access-azureml-behind-firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-access-azureml-behind-firewall.md
@@ -28,15 +28,22 @@ When using Azure Firewall, use __destination network address translation (DNAT)_
If you use an Azure Machine Learning __compute instance__ or __compute cluster__, add a [user-defined routes (UDRs)](../virtual-network/virtual-networks-udr-overview.md) for the subnet that contains the Azure Machine Learning resources. This route forces traffic __from__ the IP addresses of the `BatchNodeManagement` and `AzureMachineLearning` resources to the public IP of your compute instance and compute cluster.
-These UDRs enable the Batch service to communicate with compute nodes for task scheduling. Also add the IP address for the Azure Machine Learning service where the resources exist, as this is required for access to Compute Instances. To get a list of IP addresses of the Batch service and Azure Machine Learning service, use one of the following methods:
+These UDRs enable the Batch service to communicate with compute nodes for task scheduling. Also add the IP address for the Azure Machine Learning service, as this is required for access to Compute Instances. When adding the IP for the Azure Machine Learning service, you must add the IP for both the __primary and secondary__ Azure regions. The primary region is the one where your workspace is located.
+
+To find the secondary region, see [Ensure business continuity & disaster recovery using Azure Paired Regions](../best-practices-availability-paired-regions.md#azure-regional-pairs). For example, if your Azure Machine Learning service is in East US 2, the secondary region is Central US.
+
+To get a list of IP addresses of the Batch service and Azure Machine Learning service, use one of the following methods:
* Download the [Azure IP Ranges and Service Tags](https://www.microsoft.com/download/details.aspx?id=56519) and search the file for `BatchNodeManagement.<region>` and `AzureMachineLearning.<region>`, where `<region>` is your Azure region.
-* Use the [Azure CLI](/cli/azure/install-azure-cli?preserve-view=true&view=azure-cli-latest) to download the information. The following example downloads the IP address information and filters out the information for the East US 2 region:
+* Use the [Azure CLI](/cli/azure/install-azure-cli?preserve-view=true&view=azure-cli-latest) to download the information. The following example downloads the IP address information and filters out the information for the East US 2 region (primary) and Central US region (secondary):
```azurecli-interactive
az network list-service-tags -l "East US 2" --query "values[?starts_with(id, 'Batch')] | [?properties.region=='eastus2']"
+ # Get primary region IPs
az network list-service-tags -l "East US 2" --query "values[?starts_with(id, 'AzureMachineLearning')] | [?properties.region=='eastus2']"
+ # Get secondary region IPs
+ az network list-service-tags -l "Central US" --query "values[?starts_with(id, 'AzureMachineLearning')] | [?properties.region=='centralus']"
```

> [!TIP]
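As a minimal sketch of the UDR step, assuming a route table named `aml-udr` and a placeholder address prefix (use the prefixes returned by the queries above, for both regions):

```azurecli
az network route-table create --resource-group myRG --name aml-udr

# Repeat for each Batch / AzureMachineLearning prefix in the primary and secondary regions
az network route-table route create --resource-group myRG --route-table-name aml-udr \
  --name batch-eastus2-01 --address-prefix 20.42.0.0/24 --next-hop-type Internet

az network vnet subnet update --resource-group myRG --vnet-name myVNet \
  --name training --route-table aml-udr
```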
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-configure-auto-train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-auto-train.md
@@ -419,7 +419,7 @@ For general information on how model explanations and feature importance can be
* Attribute errors: Ex. `AttributeError: 'SimpleImputer' object has no attribute 'add_indicator'`

To work around this issue, take either of the following two steps depending on your `AutoML` SDK training version:
- * If your `AutoML` SDK training version is greater than 1.13.0, you need `pandas == 0.25.1` and `sckit-learn==0.22.1`. If there is a version mismatch, upgrade scikit-learn and/or pandas to correct version as shown below:
+ * If your `AutoML` SDK training version is greater than 1.13.0, you need `pandas == 0.25.1` and `scikit-learn==0.22.1`. If there is a version mismatch, upgrade scikit-learn and/or pandas to correct version as shown below:
```bash
pip install --upgrade pandas==0.25.1
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-secure-training-vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-training-vnet.md
@@ -160,15 +160,22 @@ There are two ways that you can accomplish this:
* Use a [Virtual Network NAT](../virtual-network/nat-overview.md). A NAT gateway provides outbound internet connectivity for one or more subnets in your virtual network. For information, see [Designing virtual networks with NAT gateway resources](../virtual-network/nat-gateway-resource.md).
-* Add [user-defined routes (UDRs)](../virtual-network/virtual-networks-udr-overview.md) to the subnet that contains the compute resource. Establish a UDR for each IP address that's used by the Azure Batch service in the region where your resources exist. These UDRs enable the Batch service to communicate with compute nodes for task scheduling. Also add the IP address for the Azure Machine Learning service where the resources exist, as this is required for access to Compute Instances. To get a list of IP addresses of the Batch service and Azure Machine Learning service, use one of the following methods:
+* Add [user-defined routes (UDRs)](../virtual-network/virtual-networks-udr-overview.md) to the subnet that contains the compute resource. Establish a UDR for each IP address that's used by the Azure Batch service in the region where your resources exist. These UDRs enable the Batch service to communicate with compute nodes for task scheduling. Also add the IP address for the Azure Machine Learning service, as this is required for access to Compute Instances. When adding the IP for the Azure Machine Learning service, you must add the IP for both the __primary and secondary__ Azure regions. The primary region is the one where your workspace is located.
+
+ To find the secondary region, see [Ensure business continuity & disaster recovery using Azure Paired Regions](../best-practices-availability-paired-regions.md#azure-regional-pairs). For example, if your Azure Machine Learning service is in East US 2, the secondary region is Central US.
+
+ To get a list of IP addresses of the Batch service and Azure Machine Learning service, use one of the following methods:
* Download the [Azure IP Ranges and Service Tags](https://www.microsoft.com/download/details.aspx?id=56519) and search the file for `BatchNodeManagement.<region>` and `AzureMachineLearning.<region>`, where `<region>` is your Azure region.
- * Use the [Azure CLI](/cli/azure/install-azure-cli?preserve-view=true&view=azure-cli-latest) to download the information. The following example downloads the IP address information and filters out the information for the East US 2 region:
+ * Use the [Azure CLI](/cli/azure/install-azure-cli?preserve-view=true&view=azure-cli-latest) to download the information. The following example downloads the IP address information and filters out the information for the East US 2 region (primary) and Central US region (secondary):
```azurecli-interactive
az network list-service-tags -l "East US 2" --query "values[?starts_with(id, 'Batch')] | [?properties.region=='eastus2']"
+ # Get primary region IPs
az network list-service-tags -l "East US 2" --query "values[?starts_with(id, 'AzureMachineLearning')] | [?properties.region=='eastus2']"
+ # Get secondary region IPs
+ az network list-service-tags -l "Central US" --query "values[?starts_with(id, 'AzureMachineLearning')] | [?properties.region=='centralus']"
```

> [!TIP]
@@ -188,7 +195,6 @@ There are two ways that you can accomplish this:
For more information, see [Create an Azure Batch pool in a virtual network](../batch/batch-virtual-network.md#user-defined-routes-for-forced-tunneling).
-
### Create a compute cluster in a virtual network

To create a Machine Learning Compute cluster, use the following steps:
marketplace https://docs.microsoft.com/en-us/azure/marketplace/azure-partner-customer-usage-attribution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-partner-customer-usage-attribution.md
@@ -243,10 +243,8 @@ foreach ($deployment in $deployments){
}
```
-
## Report
-
-You can find the report for customer usage attribution in your Partner Center dashboard ([https://partner.microsoft.com/dashboard/partnerinsights/analytics/overview](https://partner.microsoft.com/dashboard/partnerinsights/analytics/overview)). In order to see the report, you have to use your Partner Center credentials to sign in. If you encounter any issues with report or sign in, create a [support request](#get-support).
+Reporting for Azure usage tracked via customer usage attribution is not available today for ISV partners. Adding reporting to the Commercial Marketplace Program in Partner Center to cover customer usage attribution is targeted for the second half of 2021.
## Notify your customers
marketplace https://docs.microsoft.com/en-us/azure/marketplace/business-applications-isv-program https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/business-applications-isv-program.md
@@ -75,6 +75,6 @@ Ask your Account Manager or contact [Microsoft Partner Support](https://aka.ms/m
- [Business Applications for ISVs](https://partner.microsoft.com/solutions/business-applications/isv-overview) (online article)
- [Overview of the New Program for Business Applications ISVs](https://aka.ms/BizAppsISVProgram) (PDF)
+- [Business Applications ISV Connect Program FAQ](https://assetsprod.microsoft.com/faq-using-partner-center-isv-connect.pdf) (PDF)
- [Upcoming program for Business Applications ISVs](https://cloudblogs.microsoft.com/dynamics365/bdm/2019/04/17/upcoming-program-for-business-applications-isvs/) (blog post)
- [ISV Connect Program Policies](https://aka.ms/bizappsisvpolicies) (PDF)
-<!-
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/continuous-video-recording-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/continuous-video-recording-tutorial.md
@@ -103,8 +103,8 @@ You'll need the files for these steps:
 AAD_TENANT_ID="<AAD Tenant ID>"
 AAD_SERVICE_PRINCIPAL_ID="<AAD SERVICE_PRINCIPAL ID>"
 AAD_SERVICE_PRINCIPAL_SECRET="<AAD SERVICE_PRINCIPAL SECRET>"
- INPUT_VIDEO_FOLDER_ON_DEVICE="/home/lvaadmin/samples/input"
- OUTPUT_VIDEO_FOLDER_ON_DEVICE="/home/lvaadmin/samples/output"
+ VIDEO_INPUT_FOLDER_ON_DEVICE="/home/lvaadmin/samples/input"
+ VIDEO_OUTPUT_FOLDER_ON_DEVICE="/home/lvaadmin/samples/output"
 APPDATA_FOLDER_ON_DEVICE="/var/local/mediaservices"
 CONTAINER_REGISTRY_USERNAME_myacr="<your container registry username>"
 CONTAINER_REGISTRY_PASSWORD_myacr="<your container registry password>"
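If an existing *.env* file still uses the old key names, a hedged one-liner to rename them in place (the `.bak` suffix keeps a backup copy) might be:

```bash
# Rename the old folder keys to the new VIDEO_* names; writes a .env.bak backup.
sed -i.bak \
    -e 's/^INPUT_VIDEO_FOLDER_ON_DEVICE/VIDEO_INPUT_FOLDER_ON_DEVICE/' \
    -e 's/^OUTPUT_VIDEO_FOLDER_ON_DEVICE/VIDEO_OUTPUT_FOLDER_ON_DEVICE/' \
    .env
```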
@@ -178,7 +178,7 @@ When you use the Live Video Analytics on IoT Edge module to record the live vide
1. Go to src/cloud-to-device-console-app/operations.json.
1. Under the **GraphTopologySet** node, edit the following:
- `"topologyUrl" : "https://raw.githubusercontent.com/Azure/live-video-analytics/master/MediaGraph/topologies/cvr-asset/topology.json" `
+ `"topologyUrl" : "https://raw.githubusercontent.com/Azure/live-video-analytics/master/MediaGraph/topologies/cvr-asset/2.0/topology.json" `
1. Next, under the **GraphInstanceSet** and **GraphTopologyDelete** nodes, ensure that the value of **topologyName** matches the value of the **name** property in the previous graph topology: `"topologyName" : "CVRToAMSAsset"`
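A quick, illustrative way to confirm both values after editing:

```bash
# Check that the topology URL points at the 2.0 topology and the names match.
grep -n 'topologyUrl\|topologyName' src/cloud-to-device-console-app/operations.json
```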
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/event-based-video-recording-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/event-based-video-recording-tutorial.md
@@ -115,8 +115,8 @@ You'll need the files for these steps.
 AAD_TENANT_ID="<AAD Tenant ID>"
 AAD_SERVICE_PRINCIPAL_ID="<AAD SERVICE_PRINCIPAL ID>"
 AAD_SERVICE_PRINCIPAL_SECRET="<AAD SERVICE_PRINCIPAL SECRET>"
- INPUT_VIDEO_FOLDER_ON_DEVICE="/home/lvaadmin/samples/input"
- OUTPUT_VIDEO_FOLDER_ON_DEVICE="/home/lvaadmin/samples/output"
+ VIDEO_INPUT_FOLDER_ON_DEVICE="/home/lvaadmin/samples/input"
+ VIDEO_OUTPUT_FOLDER_ON_DEVICE="/home/lvaadmin/samples/output"
 APPDATA_FOLDER_ON_DEVICE="/var/local/mediaservices"
 CONTAINER_REGISTRY_USERNAME_myacr="<your container registry username>"
 CONTAINER_REGISTRY_PASSWORD_myacr="<your container registry password>"
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/get-started-detect-motion-emit-events-quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/get-started-detect-motion-emit-events-quickstart.md
@@ -43,6 +43,8 @@ This tutorial requires the following Azure resources:
For this quickstart, we recommend that you use the [Live Video Analytics resources setup script](https://github.com/Azure/live-video-analytics/tree/master/edge/setup) to deploy the required resources in your Azure subscription. To do so, follow these steps:

1. Go to [Azure portal](https://portal.azure.com) and select the Cloud Shell icon.
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/quickstarts/cloud-shell.png" alt-text="Cloud Shell":::
1. If you're using Cloud Shell for the first time, you'll be prompted to select a subscription to create a storage account and a Microsoft Azure Files share. Select **Create storage** to create a storage account for your Cloud Shell session information. This storage account is separate from the account that the script will create to use with your Azure Media Services account.
1. In the drop-down menu on the left side of the Cloud Shell window, select **Bash** as your environment.
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/includes/analyze-live-video-your-http-model-quickstart/csharp/interpret-results https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/includes/analyze-live-video-your-http-model-quickstart/csharp/interpret-results.md
@@ -74,20 +74,12 @@ In the following example, two cars were detected in the same video frame, with v
"type": "entity" } ]
- },
- "applicationProperties": {
- "topic": "/subscriptions/{subscriptionID}/resourceGroups/{name}/providers/microsoft.media/mediaservices/hubname",
- "subject": "/graphInstances/GRAPHINSTANCENAMEHERE/processors/inferenceClient",
- "eventType": "Microsoft.Media.Graph.Analytics.Inference",
- "eventTime": "2020-04-23T06:37:16.097Z"
  }
}
```

In the messages, notice the following details:
-* In `applicationProperties`, `subject` references the node in the graph topology from which the message was generated.
-* In `applicationProperties`, `eventType` indicates that this event is an analytics event.
* The `eventTime` value is the time when the event occurred.
* The `body` section contains data about the analytics event. In this case, the event is an inference event, so the body contains `inferences` data.
* The `inferences` section indicates that the `type` is `entity`. This section includes additional data about the entity.
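As a hedged illustration (assuming a message saved locally as message.json with the shape shown above), `jq` can pull out the inference types:

```bash
# List the type of each inference in the message body (e.g., "entity").
jq '.body.inferences[].type' message.json
```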
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/includes/analyze-live-video-your-http-model-quickstart/python/create-deploy-media-graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/includes/analyze-live-video-your-http-model-quickstart/python/create-deploy-media-graph.md
@@ -8,16 +8,13 @@ As part of the prerequisites, you downloaded the sample code to a folder. Follow
1. Go to the *src/cloud-to-device-console-app* folder. Here you see your *appsettings.json* file and a few other files:
- * ***c2d-console-app.csproj*** - The project file for Visual Studio Code.
- * ***operations.json*** - A list of the operations that you want the program to run.
- * ***Program.cs*** - The sample program code. This code:
+ * *operations.json* - A list of the operations that you want the program to run.
+ * *main.py* - The sample program code. This code:
* Loads the app settings.
- * Invokes direct methods that the Live Video Analytics on IoT Edge module exposes. You can use the module to analyze live video streams by invoking its [direct methods](../../../direct-methods.md).
+ * Invokes direct methods that the Live Video Analytics on IoT Edge module exposes. You can use the module to analyze live video streams by invoking its direct methods.
* Pauses so that you can examine the program's output in the **TERMINAL** window and examine the events that were generated by the module in the **OUTPUT** window.
- * Invokes direct methods to clean up resources.
--
+ * Invokes direct methods to clean up resources.
1. Edit the *operations.json* file:
    * Change the link to the graph topology:
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/includes/analyze-live-video-your-http-model-quickstart/python/interpret-results https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/includes/analyze-live-video-your-http-model-quickstart/python/interpret-results.md
@@ -74,20 +74,12 @@ In the following example, two cars were detected in the same video frame, with v
"type": "entity" } ]
- },
- "applicationProperties": {
- "topic": "/subscriptions/{subscriptionID}/resourceGroups/{name}/providers/microsoft.media/mediaservices/hubname",
- "subject": "/graphInstances/GRAPHINSTANCENAMEHERE/processors/inferenceClient",
- "eventType": "Microsoft.Media.Graph.Analytics.Inference",
- "eventTime": "2020-04-23T06:37:16.097Z"
  }
}
```

In the messages, notice the following details:
-* In `applicationProperties`, `subject` references the node in the graph topology from which the message was generated.
-* In `applicationProperties`, `eventType` indicates that this event is an analytics event.
* The `eventTime` value is the time when the event occurred.
* The `body` section contains data about the analytics event. In this case, the event is an inference event, so the body contains `inferences` data.
* The `inferences` section indicates that the `type` is `entity`. This section includes additional data about the entity.
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/includes/detect-motion-emit-events-quickstart/csharp/interpret-results https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/includes/detect-motion-emit-events-quickstart/csharp/interpret-results.md
@@ -53,21 +53,12 @@ Here's an example of this message:
} } ]
- },
- "applicationProperties": {
- "topic": "/subscriptions/{subscriptionID}/resourceGroups/{name}/providers/microsoft.media/mediaservices/hubname",
- "subject": "/graphInstances/GRAPHINSTANCENAME/processors/md",
- "eventType": "Microsoft.Media.Graph.Analytics.Inference",
- "eventTime": "2020-04-17T20:26:32.7010000Z",
- "dataVersion": "1.0"
- }
+ }
}
```

In this example:
-* In `applicationProperties`, `subject` references the node in the media graph from which the message was generated. In this case, the message originates from the motion detection processor node.
-* In `applicationProperties`, `eventType` indicates that this event is an analytics event.
* The `eventTime` value is the time when the event occurred.
* The `body` value is data about the analytics event. In this case, the event is an inference event, so the body contains `timestamp` and `inferences` data.
* The `inferences` data indicates that the `type` is `motion`. It has additional data about that `motion` event.
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/includes/detect-motion-emit-events-quickstart/csharp/prepare-monitor-events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/includes/detect-motion-emit-events-quickstart/csharp/prepare-monitor-events.md
@@ -5,3 +5,9 @@ You'll use the Live Video Analytics on IoT Edge module to detect motion in the i
1. Right-click **lva-sample-device** and select **Start Monitoring Built-in Event Endpoint**.

   ![Start monitoring a built-in event endpoint](../../../media/quickstarts/start-monitoring-iothub-events.png)
+
+> [!NOTE]
+> You might be asked to provide the built-in endpoint information for your IoT hub. To get it, in the Azure portal, navigate to your IoT hub and select **Built-in endpoints** in the left navigation pane. Copy the **Event Hub-compatible endpoint** value shown there. The endpoint will look something like this:
+ ```
+ Endpoint=sb://iothub-ns-xxx.servicebus.windows.net/;SharedAccessKeyName=iothubowner;SharedAccessKey=XXX;EntityPath=<IoT Hub name>
+ ```
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/includes/detect-motion-emit-events-quickstart/csharp/setup-azure-resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/includes/detect-motion-emit-events-quickstart/csharp/setup-azure-resources.md
@@ -8,6 +8,8 @@ This tutorial requires the following Azure resources:
For this quickstart, we recommend that you use the [Live Video Analytics resources setup script](https://github.com/Azure/live-video-analytics/tree/master/edge/setup) to deploy the required resources in your Azure subscription. To do so, follow these steps:

1. Open [Azure Cloud Shell](https://ms.portal.azure.com/#cloudshell/).
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="../../../media/quickstarts/cloud-shell.png" alt-text="Cloud Shell":::
1. If you're using Cloud Shell for the first time, you'll be prompted to select a subscription to create a storage account and a Microsoft Azure Files share. Select **Create storage** to create a storage account for your Cloud Shell session information. This storage account is separate from the account the script will create to use with your Azure Media Services account.
1. In the drop-down menu on the left side of the Cloud Shell window, select **Bash** as your environment.
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/includes/detect-motion-emit-events-quickstart/csharp/setup-development-environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/includes/detect-motion-emit-events-quickstart/csharp/setup-development-environment.md
@@ -23,8 +23,8 @@
 AAD_TENANT_ID="<AAD Tenant ID>"
 AAD_SERVICE_PRINCIPAL_ID="<AAD SERVICE_PRINCIPAL ID>"
 AAD_SERVICE_PRINCIPAL_SECRET="<AAD SERVICE_PRINCIPAL SECRET>"
- INPUT_VIDEO_FOLDER_ON_DEVICE="/home/lvaadmin/samples/input"
- OUTPUT_VIDEO_FOLDER_ON_DEVICE="/var/media"
+ VIDEO_INPUT_FOLDER_ON_DEVICE="/home/lvaadmin/samples/input"
+ VIDEO_OUTPUT_FOLDER_ON_DEVICE="/var/media"
 APPDATA_FOLDER_ON_DEVICE="/var/local/mediaservices"
 CONTAINER_REGISTRY_USERNAME_myacr="<your container registry username>"
 CONTAINER_REGISTRY_PASSWORD_myacr="<your container registry password>"
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/includes/detect-motion-emit-events-quickstart/python/interpret-results https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/includes/detect-motion-emit-events-quickstart/python/interpret-results.md
@@ -53,21 +53,12 @@ Here's an example of this message:
} } ]
- },
- "applicationProperties": {
- "topic": "/subscriptions/{subscriptionID}/resourceGroups/{name}/providers/microsoft.media/mediaservices/hubname",
- "subject": "/graphInstances/GRAPHINSTANCENAME/processors/md",
- "eventType": "Microsoft.Media.Graph.Analytics.Inference",
- "eventTime": "2020-04-17T20:26:32.7010000Z",
- "dataVersion": "1.0"
- }
+ }
}
```

In this example:
-* In `applicationProperties`, `subject` references the node in the media graph from which the message was generated. In this case, the message originates from the motion detection processor node.
-* In `applicationProperties`, `eventType` indicates that this event is an analytics event.
* The `eventTime` value is the time when the event occurred.
* The `body` value is data about the analytics event. In this case, the event is an inference event, so the body contains `timestamp` and `inferences` data.
* The `inferences` data indicates that the `type` is `motion`. It has additional data about that `motion` event.
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/includes/detect-motion-emit-events-quickstart/python/prepare-monitor-events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/includes/detect-motion-emit-events-quickstart/python/prepare-monitor-events.md
@@ -5,3 +5,9 @@ You'll use the Live Video Analytics on IoT Edge module to detect motion in the i
1. Right-click **lva-sample-device** and select **Start Monitoring Built-in Event Endpoint**.

   ![Start monitoring a built-in event endpoint](../../../media/quickstarts/start-monitoring-iothub-events.png)
+
+> [!NOTE]
+> You might be asked to provide the built-in endpoint information for your IoT hub. To get it, in the Azure portal, navigate to your IoT hub and select **Built-in endpoints** in the left navigation pane. Copy the **Event Hub-compatible endpoint** value shown there. The endpoint will look something like this:
+ ```
+ Endpoint=sb://iothub-ns-xxx.servicebus.windows.net/;SharedAccessKeyName=iothubowner;SharedAccessKey=XXX;EntityPath=<IoT Hub name>
+ ```
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/includes/detect-motion-emit-events-quickstart/python/setup-azure-resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/includes/detect-motion-emit-events-quickstart/python/setup-azure-resources.md
@@ -8,6 +8,8 @@ This tutorial requires the following Azure resources:
For this quickstart, we recommend that you use the [Live Video Analytics resources setup script](https://github.com/Azure/live-video-analytics/tree/master/edge/setup) to deploy the required resources in your Azure subscription. To do so, follow these steps:

1. Open [Azure Cloud Shell](https://ms.portal.azure.com/#cloudshell/).
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="../../../media/quickstarts/cloud-shell.png" alt-text="Cloud Shell":::
1. If you're using Cloud Shell for the first time, you'll be prompted to select a subscription to create a storage account and a Microsoft Azure Files share. Select **Create storage** to create a storage account for your Cloud Shell session information. This storage account is separate from the account the script will create to use with your Azure Media Services account.
1. In the drop-down menu on the left side of the Cloud Shell window, select **Bash** as your environment.
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/includes/detect-motion-emit-events-quickstart/python/setup-development-environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/includes/detect-motion-emit-events-quickstart/python/setup-development-environment.md
@@ -23,8 +23,8 @@
 AAD_TENANT_ID="<AAD Tenant ID>"
 AAD_SERVICE_PRINCIPAL_ID="<AAD SERVICE_PRINCIPAL ID>"
 AAD_SERVICE_PRINCIPAL_SECRET="<AAD SERVICE_PRINCIPAL SECRET>"
- INPUT_VIDEO_FOLDER_ON_DEVICE="/home/lvaadmin/samples/input"
- OUTPUT_VIDEO_FOLDER_ON_DEVICE="/var/media"
+ VIDEO_INPUT_FOLDER_ON_DEVICE="/home/lvaadmin/samples/input"
+ VIDEO_OUTPUT_FOLDER_ON_DEVICE="/var/media"
 APPDATA_FOLDER_ON_DEVICE="/var/local/mediaservices"
 CONTAINER_REGISTRY_USERNAME_myacr="<your container registry username>"
 CONTAINER_REGISTRY_PASSWORD_myacr="<your container registry password>"
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/includes/detect-motion-record-video-clips-edge-devices-quickstart/csharp/play-mp4-clip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/includes/detect-motion-record-video-clips-edge-devices-quickstart/csharp/play-mp4-clip.md
@@ -1,4 +1,4 @@
-The MP4 files are written to a directory on the edge device that you configured in the *.env* file by using the OUTPUT_VIDEO_FOLDER_ON_DEVICE key. If you used the default value, then the results should be in the */var/media/* folder.
+The MP4 files are written to a directory on the edge device that you configured in the *.env* file by using the VIDEO_OUTPUT_FOLDER_ON_DEVICE key. If you used the default value, then the results should be in the */var/media/* folder.
To play the MP4 clip:
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/includes/detect-motion-record-video-clips-edge-devices-quickstart/csharp/prepare-monitoring-events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/includes/detect-motion-record-video-clips-edge-devices-quickstart/csharp/prepare-monitoring-events.md
@@ -2,3 +2,8 @@ Make sure you've completed the steps to [Prepare to monitor events](../../../det
![Start Monitoring Built-in Event Endpoint](../../../media/quickstarts/start-monitoring-iothub-events.png)
+> [!NOTE]
+> You might be asked to provide the built-in endpoint information for your IoT hub. To get it, in the Azure portal, navigate to your IoT hub and select **Built-in endpoints** in the left navigation pane. Copy the **Event Hub-compatible endpoint** value shown there. The endpoint will look something like this:
+ ```
+ Endpoint=sb://iothub-ns-xxx.servicebus.windows.net/;SharedAccessKeyName=iothubowner;SharedAccessKey=XXX;EntityPath=<IoT Hub name>
+ ```
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/includes/detect-motion-record-video-clips-edge-devices-quickstart/python/play-mp4-clip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/includes/detect-motion-record-video-clips-edge-devices-quickstart/python/play-mp4-clip.md
@@ -1,4 +1,4 @@
-The MP4 files are written to a directory on the edge device that you configured in the *.env* file by using the OUTPUT_VIDEO_FOLDER_ON_DEVICE key. If you used the default value, then the results should be in the */var/media/* folder.
+The MP4 files are written to a directory on the edge device that you configured in the *.env* file by using the VIDEO_OUTPUT_FOLDER_ON_DEVICE key. If you used the default value, then the results should be in the */var/media/* folder.
To play the MP4 clip:
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/includes/detect-motion-record-video-clips-edge-devices-quickstart/python/prepare-monitoring-events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/includes/detect-motion-record-video-clips-edge-devices-quickstart/python/prepare-monitoring-events.md
@@ -2,3 +2,8 @@ Make sure you've completed the steps to [Prepare to monitor events](../../../det
![Start Monitoring Built-in Event Endpoint](../../../media/quickstarts/start-monitoring-iothub-events.png)
+> [!NOTE]
+> You might be asked to provide the built-in endpoint information for your IoT hub. To get it, in the Azure portal, navigate to your IoT hub and select **Built-in endpoints** in the left navigation pane. Copy the **Event Hub-compatible endpoint** value shown there. The endpoint will look something like this:
+ ```
+ Endpoint=sb://iothub-ns-xxx.servicebus.windows.net/;SharedAccessKeyName=iothubowner;SharedAccessKey=XXX;EntityPath=<IoT Hub name>
+ ```
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/monitoring-logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/monitoring-logging.md
@@ -249,14 +249,14 @@ Follow these steps to enable the collection of metrics from the Live Video Analy
urls = ["http://edgeHub:9600/metrics", "http://edgeAgent:9600/metrics", "http://{LVA_EDGE_MODULE_NAME}:9600/metrics"] [[outputs.azure_monitor]]
- namespace_prefix = ""
+ namespace_prefix = "lvaEdge"
region = "westus" resource_id = "/subscriptions/{SUBSCRIPTON_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Devices/IotHubs/{IOT_HUB_NAME}" ``` > [!IMPORTANT] > Be sure to replace the variables in the .toml file. The variables are denoted by braces (`{}`).
-1. In the same folder, create a `.dockerfile` that contains the following commands:
+1. In the same folder, create a Dockerfile that contains the following commands:
   ```
   FROM telegraf:1.15.3-alpine
   COPY telegraf.toml /etc/telegraf/telegraf.conf
@@ -300,12 +300,27 @@ Follow these steps to enable the collection of metrics from the Live Video Analy
   `AZURE_CLIENT_SECRET`: Specifies the app secret to use.

   >[!TIP]
- > You can give the service principal the **Monitoring Metrics Publisher** role.
+ > You can give the service principal the **Monitoring Metrics Publisher** role. Follow the steps in **[Create service principal](https://docs.microsoft.com/azure/azure-arc/data/upload-metrics-and-logs-to-azure-monitor?pivots=client-operating-system-macos-and-linux#create-service-principal)** to create the service principal and assign the role.
1. After the modules are deployed, metrics will appear in Azure Monitor under a single namespace. Metric names will match the ones emitted by Prometheus. In this case, in the Azure portal, go to the IoT hub and select **Metrics** in the left pane. You should see the metrics there.
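For reference, a hedged sketch of creating such a service principal with the role already assigned; the name is a placeholder, and the scope reuses the resource ID from the .toml file above:

```azurecli
# Create a service principal scoped to the IoT hub with the Monitoring Metrics Publisher role.
az ad sp create-for-rbac \
    --name lva-metrics-publisher \
    --role "Monitoring Metrics Publisher" \
    --scopes /subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Devices/IotHubs/{IOT_HUB_NAME}
```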
+Using Prometheus along with [Log Analytics](https://docs.microsoft.com/azure/azure-monitor/log-query/log-analytics-tutorial), you can generate and [monitor metrics](https://docs.microsoft.com/azure/azure-monitor/platform/metrics-supported) such as the CPU and memory percentages used. With the Kusto query language, you can write queries like the following one to get the CPU percentage used by the IoT Edge modules.
+```kusto
+let cpu_metrics = promMetrics_CL
+| where Name_s == "edgeAgent_used_cpu_percent"
+| extend dimensions = parse_json(Tags_s)
+| extend module_name = tostring(dimensions.module_name)
+| where module_name in ("lvaEdge","yolov3","tinyyolov3")
+| summarize cpu_percent = avg(Value_d) by bin(TimeGenerated, 5s), module_name;
+cpu_metrics
+| summarize cpu_percent = sum(cpu_percent) by TimeGenerated
+| extend module_name = "Total"
+| union cpu_metrics
+```
+
+[ ![Diagram that shows the metrics using Kusto query.](./media/telemetry-schema/metrics.png)](./media/telemetry-schema/metrics.png#lightbox)
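Such queries can also be run outside the portal; here is a hedged sketch using the Azure CLI's `log-analytics` extension, where the workspace ID is a placeholder:

```azurecli
# Run a simplified version of the query above against the workspace over the last day.
az extension add --name log-analytics
az monitor log-analytics query \
    --workspace {WORKSPACE_ID} \
    --analytics-query "promMetrics_CL | where Name_s == 'edgeAgent_used_cpu_percent' | summarize avg(Value_d) by bin(TimeGenerated, 5m)" \
    --timespan P1D
```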
## Logging

As with other IoT Edge modules, you can also [examine the container logs](../../iot-edge/troubleshoot.md#check-container-logs-for-issues) on the edge device. You can configure the information that's written to the logs by using the [following module twin](module-twin-configuration-schema.md) properties:
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/spatial-analysis-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/spatial-analysis-tutorial.md
@@ -105,8 +105,8 @@ Follow [these steps](../../databox-online/azure-stack-edge-gpu-deploy-prep.md) t
 AAD_TENANT_ID="<AAD Tenant ID>"
 AAD_SERVICE_PRINCIPAL_ID="<AAD SERVICE_PRINCIPAL ID>"
 AAD_SERVICE_PRINCIPAL_SECRET="<AAD SERVICE_PRINCIPAL SECRET>"
- INPUT_VIDEO_FOLDER_ON_DEVICE="/home/lvaadmin/samples/input"
- OUTPUT_VIDEO_FOLDER_ON_DEVICE="/var/media"
+ VIDEO_INPUT_FOLDER_ON_DEVICE="/home/lvaadmin/samples/input"
+ VIDEO_OUTPUT_FOLDER_ON_DEVICE="/var/media"
 APPDATA_FOLDER_ON_DEVICE="/var/local/mediaservices"
 CONTAINER_REGISTRY_USERNAME_myacr="<your container registry username>"
 CONTAINER_REGISTRY_PASSWORD_myacr="<your container registry password>"
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/use-your-model-quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/use-your-model-quickstart.md
@@ -10,8 +10,6 @@ zone_pivot_groups: ams-lva-edge-programming-languages
This quickstart shows you how to use Live Video Analytics on IoT Edge to analyze a live video feed from a (simulated) IP camera. You'll see how to apply a computer vision model to detect objects. A subset of the frames in the live video feed is sent to an inference service. The results are sent to IoT Edge Hub.
-This quickstart uses an Azure VM as an IoT Edge device, and it uses a simulated live video stream. It's based on sample code written in C#, and it builds on the [Detect motion and emit events](detect-motion-emit-events-quickstart.md) quickstart.
::: zone pivot="programming-language-csharp"
[!INCLUDE [header](includes/analyze-live-video-your-http-model-quickstart/csharp/header.md)]
::: zone-end
mysql https://docs.microsoft.com/en-us/azure/mysql/concepts-server-parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-server-parameters.md
@@ -25,8 +25,7 @@ Refer to the following sections below to learn more about the limits of the seve
### Thread pools
-MySQL traditionally assigns a thread for every client connection. As the number of concurrent users grows, there is a corresponding drop in
-formance. Many active threads can impact the performance significantly due to increased context switching, thread contention, and bad locality for CPU caches.
+MySQL traditionally assigns a thread for every client connection. As the number of concurrent users grows, there is a corresponding drop in performance. Many active threads can significantly impact performance due to increased context switching, thread contention, and bad locality for CPU caches.
Thread pools, a server-side feature distinct from connection pooling, maximize performance by introducing a dynamic pool of worker threads that limits the number of active threads running on the server and minimizes thread churn. This helps ensure that a burst of connections will not cause the server to run out of resources or crash with an out-of-memory error. Thread pools are most efficient for short queries and CPU-intensive workloads, such as OLTP workloads.
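On Azure Database for MySQL, the thread pool is controlled through the `thread_handling` server parameter; a hedged sketch of enabling it with the Azure CLI follows, assuming the parameter is exposed on your server version (server and resource group names are placeholders):

```azurecli
# Switch the server from one-thread-per-connection to the thread pool.
az mysql server configuration set \
    --resource-group my-rg \
    --server-name my-mysql-server \
    --name thread_handling \
    --value "pool-of-threads"
```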
mysql https://docs.microsoft.com/en-us/azure/mysql/how-to-major-version-upgrade https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/how-to-major-version-upgrade.md
@@ -115,15 +115,7 @@ The GA of this feature is planned before MySQL v5.6 retirement. However, the fea
### Will this cause downtime of the server and if so, how long?
-Yes, the server will be unavailable during the upgrade process so we recommend you perform this operation during your planned maintenance window. The estimated downtime depends on the database size, storage size provisioned (IOPs provisioned), and the number of tables on the database. The upgrade time is directly proportional to the number of tables on the server.The upgrades of Basic SKU servers are expected to take longer time as it is on standard storage platform. To estimate the downtime for your server environment, we recommend to first perform upgrade on restored copy of the server.
-
-### It is noted that it is not supported on replica server yet. What does that mean concrete?
-
-Currently, major version upgrade is not supported for replica server, which means you should not run it for servers involved in replication (either source or replica server). If you would like to test the upgrade of the servers involved in replication before we add the replica support for upgrade feature, we would recommend following steps:
-
-1. During your planned maintenance, [stop replication and delete replica server](howto-read-replicas-portal.md) after capturing its name and all the configuration information (Firewall settings, server parameter configuration if it is different from source server).
-2. Perform upgrade of the source server.
-3. Provision a new read replica server with the same name and configuration settings captured in step 1. The new replica server will be on v5.7 automatically after the source server is upgraded to v5.7.
+Yes, the server will be unavailable during the upgrade process, so we recommend you perform this operation during your planned maintenance window. The estimated downtime depends on the database size, the storage size provisioned (IOPS provisioned), and the number of tables in the database. The upgrade time is directly proportional to the number of tables on the server. Upgrades of Basic SKU servers are expected to take longer because they run on the standard storage platform. To estimate the downtime for your server environment, we recommend first performing the upgrade on a restored copy of the server. Consider [performing a minimal-downtime major version upgrade from MySQL 5.6 to MySQL 5.7 by using read replicas](#perform-minimal-downtime-major-version-upgrade-from-mysql-56-to-mysql-57-using-read-replicas).
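For context, the in-place upgrade referenced above is a single CLI call; a hedged sketch with placeholder names:

```azurecli
# Upgrade the server in place from MySQL 5.6 to 5.7 (expect downtime; see above).
az mysql server upgrade \
    --resource-group my-rg \
    --name my-mysql-server \
    --target-server-version 5.7
```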
### What will happen if we do not choose to upgrade our MySQL v5.6 server before February 5, 2021?
@@ -131,4 +123,4 @@ You can still continue running your MySQL v5.6 server as before. Azure **will ne
## Next steps
-Learn about [Azure Database for MySQL versioning policy](concepts-version-policy.md).
+Learn about [Azure Database for MySQL versioning policy](concepts-version-policy.md).
notebooks https://docs.microsoft.com/en-us/azure/notebooks/quickstart-export-jupyter-notebook-project https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/notebooks/quickstart-export-jupyter-notebook-project.md
@@ -1,33 +1,17 @@
Title: Export a Jupyter Notebook project from the Azure Notebooks Preview
-description: Quickly export a Jupyter Notebook project.
+ Title: Use a Jupyter Notebook with Microsoft offerings
+description: Learn about how Jupyter Notebooks can be used with Microsoft offerings.
Previously updated : 06/29/2020 Last updated : 02/01/2021
-# Quickstart: Export a Jupyter Notebook project in Azure Notebooks Preview
+# Use a Jupyter Notebook with Microsoft offerings
[!INCLUDE [notebooks-status](../../includes/notebooks-status.md)]
-In this quickstart, you will download an Azure Notebooks project for use in other Jupyter Notebook solutions.
+In this quickstart, you will learn how to import a Jupyter Notebook for use in assorted Microsoft offerings.
-## Prerequisites
-
-An existing Azure Notebooks project.
-
-## Export an Azure Notebooks project
-
-1. Go to [Azure Notebooks](https://notebooks.azure.com) and sign in. For details, see [Quickstart - Sign in to Azure Notebooks](quickstart-sign-in-azure-notebooks.md).
-
-1. From your public profile page, select **My Projects** at the top of the page:
-
- ![My Projects link on the top of the browser window](media/quickstarts/my-projects-link.png)
-
-1. Select a project.
-1. Click the download "Download" button to trigger a zip file download that contains all of your project files.
-1. Alternatively, from a specific project page, click the "Download Project" button to download all the files of a given project.
-
-After downloading your project files, you can use them with other Jupyter Notebook solutions. Some options described in the sections below include:
+If you have existing Jupyter Notebooks or want to start a new project, you can use them with many of Microsoft's offerings. Some options described in the sections below include:
- [Visual Studio Code](#use-notebooks-in-visual-studio-code)
- [GitHub Codespaces](#use-notebooks-in-github-codespaces)
- [Azure Machine Learning](#use-notebooks-with-azure-machine-learning)
@@ -36,7 +20,7 @@ After downloading your project files, you can use them with other Jupyter Notebo
## Create an environment for notebooks
-If you'd like to create an environment that matches that of the Azure Notebooks Preview, you can use the script file provided in GitHub.
+If you'd like to create an environment that matches that of the retired Azure Notebooks Preview, you can use the script file provided in GitHub.
1. Navigate to the Azure Notebooks [GitHub repository](https://github.com/microsoft/AzureNotebooks) or [directly access the environment folder](https://aka.ms/aznbrequirementstxt).
1. From a command prompt, navigate to the directory you want to use for your projects.
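A hedged sketch of that local setup, assuming the environment folder provides a standard requirements file:

```bash
# Create an isolated Python environment and install the Azure Notebooks package set.
python -m venv aznb-env
source aznb-env/bin/activate
pip install -r requirements.txt
```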
@@ -49,14 +33,14 @@ If you'd like to create an environment that matches that of the Azure Notebooks
![VS Code Jupyter Notebook support](media/vs-code-jupyter-notebook.png)
-After [downloading](#export-an-azure-notebooks-project) your project files you can use them with VS Code. For guidance using VS Code with Jupyter Notebooks, see the [Working with Jupyter Notebooks in Visual Studio Code](https://code.visualstudio.com/docs/python/jupyter-support) and [Data Science in Visual Studio Code](https://code.visualstudio.com/docs/python/data-science-tutorial) tutorials.
+If you have existing project files or would like to create a new notebook, you can use VS Code! For guidance on using VS Code with Jupyter Notebooks, see the [Working with Jupyter Notebooks in Visual Studio Code](https://code.visualstudio.com/docs/python/jupyter-support) and [Data Science in Visual Studio Code](https://code.visualstudio.com/docs/python/data-science-tutorial) tutorials.
You can also use the [Azure Notebooks environment script](#create-an-environment-for-notebooks) with Visual Studio Code to create an environment that matches the Azure Notebooks Preview.

## Use Notebooks in GitHub Codespaces

GitHub Codespaces provides cloud-hosted environments where you can edit your notebooks using Visual Studio Code or in your web browser. It offers the same great Jupyter experience as VS Code, but without needing to install anything on your device. If you don't want to set up a local environment and prefer a cloud-backed solution, then creating a codespace is a great option. To get started:
-1. [Download](#export-an-azure-notebooks-project) your project files.
+1. (Optional) Gather any project files you would like to use with GitHub Codespaces.
1. [Create a GitHub repository](https://help.github.com/github/getting-started-with-github/create-a-repo) for storing your notebooks.
1. [Add your files](https://help.github.com/github/managing-files-in-a-repository/adding-a-file-to-a-repository) to the repository.
1. [Request Access to the GitHub Codespaces Preview](https://github.com/features/codespaces)
@@ -65,14 +49,14 @@ GitHub Codespaces provides cloud hosted environments where you can edit your not
Azure Machine Learning provides an end-to-end machine learning platform to enable users to build and deploy models faster on Azure. Azure ML allows you to run Jupyter Notebooks on a VM or a shared cluster computing environment. If you are in need of a cloud-based solution for your ML workload with experiment tracking, dataset management, and more, we recommend Azure Machine Learning. To get started with Azure ML:
-1. [Download](#export-an-azure-notebooks-project) your project files.
+1. (Optional) Gather any project files you would like to use with Azure ML.
1. [Create a Workspace](../machine-learning/how-to-manage-workspace.md) in the Azure portal.

   ![Create a Workspace](../machine-learning/media/how-to-manage-workspace/create-workspace.gif)

1. Open the [Azure Studio (preview)](https://ml.azure.com/).
1. Using the left-side navigation bar, select **Notebooks**.
-1. Click on the **Upload files** button and upload the project files that you downloaded from Azure Notebooks.
+1. Click on the **Upload files** button and upload the project files.
For additional information about Azure ML and running Jupyter Notebooks, you can review the [documentation](../machine-learning/how-to-run-jupyter-notebooks.md) or try the [Intro to Machine Learning](/learn/modules/intro-to-azure-machine-learning-service/) module on Microsoft Learn.
@@ -83,13 +67,13 @@ For additional information about Azure ML and running Jupyter Notebooks, you can
![image](../lab-services/media/tutorial-setup-classroom-lab/new-lab-button.png)
- After [downloading](#export-an-azure-notebooks-project) your project files you can use them with Azure Lab Services. For guidance about setting up a lab, see [Set up a lab to teach data science with Python and Jupyter Notebooks](../lab-services/class-type-jupyter-notebook.md)
+If you have existing project files or would like to create a new notebook, you can use Azure Lab Services. For guidance about setting up a lab, see [Set up a lab to teach data science with Python and Jupyter Notebooks](../lab-services/class-type-jupyter-notebook.md)
## Use GitHub

GitHub provides a free, source-control-backed way to store notebooks (and other files), share your notebooks with others, and work collaboratively. If you're looking for a way to share your projects and collaborate with others, GitHub is a great option and can be combined with [GitHub Codespaces](#use-notebooks-in-github-codespaces) for a great development experience. To get started with GitHub:
-1. [Download](#export-an-azure-notebooks-project) your project files.
+1. (Optional) Gather any project files you would like to use with GitHub.
1. [Create a GitHub repository](https://help.github.com/github/getting-started-with-github/create-a-repo) for storing your notebooks.
1. [Add your files](https://help.github.com/github/managing-files-in-a-repository/adding-a-file-to-a-repository) to the repository.
@@ -99,4 +83,4 @@ GitHub provides a free, source-control-backed way to store notebooks (and other
- [Learn about Azure Machine Learning and Jupyter Notebooks](../machine-learning/how-to-run-jupyter-notebooks.md)
- [Learn about GitHub Codespaces](https://github.com/features/codespaces)
- [Learn about Azure Lab Services](https://azure.microsoft.com/services/lab-services/)
-- [Learn about GitHub](https://help.github.com/github/getting-started-with-github/)
+- [Learn about GitHub](https://help.github.com/github/getting-started-with-github/)
private-link https://docs.microsoft.com/en-us/azure/private-link/private-endpoint-dns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/private-endpoint-dns.md
@@ -66,7 +66,7 @@ For Azure services, use the recommended zone names as described in the following
| Azure Event Grid (Microsoft.EventGrid/topics) / topic | privatelink.eventgrid.azure.net | eventgrid.azure.net |
| Azure Event Grid (Microsoft.EventGrid/domains) / domain | privatelink.eventgrid.azure.net | eventgrid.azure.net |
| Azure Web Apps (Microsoft.Web/sites) / sites | privatelink.azurewebsites.net | azurewebsites.net |
-| Azure Machine Learning (Microsoft.MachineLearningServices/workspaces) / amlworkspace | privatelink.api.azureml.ms | api.azureml.ms |
+| Azure Machine Learning (Microsoft.MachineLearningServices/workspaces) / amlworkspace | privatelink.api.az