Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory | Leave The Organization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/leave-the-organization.md | Administrators can use the **External user leave settings** to control whether e - **Yes**: Users can leave the organization themselves without approval from your admin or privacy contact. - **No**: Users can't leave your organization themselves. They'll see a message guiding them to contact your admin, or privacy contact to request removal from your organization. - :::image type="content" source="media/leave-the-organization/external-user-leave-settings.png" alt-text="Screenshot showing External user leave settings in the portal."::: + :::image type="content" source="media/leave-the-organization/external-user-leave-settings.png" alt-text="Screenshot showing External user leave settings in the portal." lightbox="media/leave-the-organization/external-user-leave-settings.png"::: ### Account removal For B2B direct connect users, data removal begins as soon as the user selects ** - Learn more about [user deletion](/compliance/regulatory/gdpr-dsr-azure#step-5-delete) and about how to delete a user's data when there's [no account in the Azure tenant](/compliance/regulatory/gdpr-dsr-azure#delete-a-users-data-when-there-is-no-account-in-the-azure-tenant). - For more information about GDPR, see the GDPR section of the [Service Trust portal](https://servicetrust.microsoft.com/ViewPage/GDPRGetStarted).+- Learn more about [audit logs and access reviews](auditing-and-reporting.md). |
active-directory | Tutorial Bulk Invite | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/tutorial-bulk-invite.md | If you use Azure Active Directory (Azure AD) B2B collaboration to work with exte > * Upload the .csv file to Azure AD > * Verify the users were added to the directory -If you don't have Azure Active Directory, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. -- ## Prerequisites--You need two or more test email accounts that you can send the invitations to. The accounts must be from outside your organization. You can use any type of account, including social accounts such as gmail.com or outlook.com addresses. -+- If you don't have Azure Active Directory, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. +- You need two or more test email accounts that you can send the invitations to. The accounts must be from outside your organization. You can use any type of account, including social accounts such as gmail.com or outlook.com addresses. ## Invite guest users in bulk |
active-directory | Debug Saml Sso Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/debug-saml-sso-issues.md | -Learn how to find and fix [single sign-on](what-is-single-sign-on.md) issues for applications in Azure Active Directory (Azure AD) that use SAML-based single sign-on. +In this article, you learn how to find and fix [single sign-on](what-is-single-sign-on.md) issues for applications in Azure Active Directory (Azure AD) that use SAML-based single sign-on. ## Before you begin To download and install the My Apps Secure Sign-in Extension, use one of the fol To test SAML-based single sign-on between Azure AD and a target application: 1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator or other administrator that is authorized to manage applications.-1. In the left blade, select **Azure Active Directory**, and then select **Enterprise applications**. -1. From the list of enterprise applications, select the application for which you want to test single sign-on, and then from the options on the left select **Single sign-on**. +1. In the left navigation pane, select **Azure Active Directory**, and then select **Enterprise applications**. +1. From the list of enterprise applications, select the application for which you want to test single sign-on, and then from the options on the left, select **Single sign-on**. 1. To open the SAML-based single sign-on testing experience, go to **Test single sign-on** (step 5). If the **Test** button is greyed out, you need to fill out and save the required attributes first in the **Basic SAML Configuration** section.-1. In the **Test single sign-on** blade, use your corporate credentials to sign in to the target application. You can sign in as the current user or as a different user. If you sign in as a different user, a prompt will ask you to authenticate. +1. In the **Test single sign-on** page, use your corporate credentials to sign in to the target application. You can sign in as the current user or as a different user. If you sign in as a different user, a prompt asks you to authenticate.  To debug this error, you need the error message and the SAML request. The My App ### To resolve the sign-in error with the My Apps Secure Sign-in Extension installed -1. When an error occurs, the extension redirects you back to the Azure AD **Test single sign-on** blade. -1. On the **Test single sign-on** blade, select **Download the SAML request**. +1. When an error occurs, the extension redirects you back to the Azure AD **Test single sign-on** page. +1. On the **Test single sign-on** page, select **Download the SAML request**. 1. You should see specific resolution guidance based on the error and the values in the SAML request.-1. You'll see a **Fix it** button to automatically update the configuration in Azure AD to resolve the issue. If you don't see this button, then the sign-in issue isn't due to a misconfiguration on Azure AD. +1. You see a **Fix it** button to automatically update the configuration in Azure AD to resolve the issue. If you don't see this button, then the sign-in issue isn't due to a misconfiguration on Azure AD. If no resolution is provided for the sign-in error, we suggest that you use the feedback textbox to inform us. If no resolution is provided for the sign-in error, we suggest that you use the 1. Copy the error message at the bottom right corner of the page. The error message includes: - A CorrelationID and Timestamp. 
These values are important when you create a support case with Microsoft because they help the engineers to identify your problem and provide an accurate resolution to your issue. - A statement identifying the root cause of the problem.-1. Go back to Azure AD and find the **Test single sign-on** blade. +1. Go back to Azure AD and find the **Test single sign-on** page. 1. In the text box above **Get resolution guidance**, paste the error message. 1. Select **Get resolution guidance** to display steps for resolving the issue. The guidance might require information from the SAML request or SAML response. If you're not using the My Apps Secure Sign-in Extension, you might need a tool such as [Fiddler](https://www.telerik.com/fiddler) to retrieve the SAML request and response. 1. Verify that the destination in the SAML request corresponds to the SAML Single Sign-on Service URL obtained from Azure AD. If no resolution is provided for the sign-in error, we suggest that you use the ## Resolve a sign-in error on the application page -You might sign in successfully and then see an error on the application's page. This occurs when Azure AD issued a token to the application, but the application doesn't accept the response. +You might sign in successfully and then see an error on the application's page. This error occurs when Azure AD issued a token to the application, but the application doesn't accept the response. To resolve the error, follow these steps, or watch this [short video about how to use Azure AD to troubleshoot SAML SSO](https://www.youtube.com/watch?v=poQCJK0WPUk&list=PLLasX02E8BPBm1xNMRdvP6GtA6otQUqp0&index=8): 1. If the application is in the Azure AD Gallery, verify that you've followed all the steps for integrating the application with Azure AD. To find the integration instructions for your application, see the [list of SaaS application integration tutorials](../saas-apps/tutorial-list.md). 1. Retrieve the SAML response.- - If the My Apps Secure Sign-in extension is installed, from the **Test single sign-on** blade, select **download the SAML response**. + - If the My Apps Secure Sign-in extension is installed, from the **Test single sign-on** page, select **download the SAML response**. - If the extension isn't installed, use a tool such as [Fiddler](https://www.telerik.com/fiddler) to retrieve the SAML response. 1. Notice these elements in the SAML response token: - User unique identifier of NameID value and format To resolve the error, follow these steps, or watch this [short video about how t ## Next steps -Now that single sign-on is working to your application, you could [Automate user provisioning and de-provisioning to SaaS applications](../app-provisioning/user-provisioning.md) or [get started with Conditional Access](../conditional-access/app-based-conditional-access.md). +Now that single sign-on is working to your application, you could [Automate user provisioning and deprovisioning to SaaS applications](../app-provisioning/user-provisioning.md) or [get started with Conditional Access](../conditional-access/app-based-conditional-access.md). |
active-directory | Howto Saml Token Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/howto-saml-token-encryption.md | To configure enterprise application's SAML token encryption, follow these steps: Create an asymmetric key pair to use for encryption. Or, if the application supplies a public key to use for encryption, follow the application's instructions to download the X.509 certificate. - The public key should be stored in an X.509 certificate file in .cer format. -+ The public key should be stored in an X.509 certificate file in .cer format. You can copy the contents of the certificate file to a text editor and save it as a .cer file. The certificate file should contain only the public key and not the private key. + If the application uses a key that you create for your instance, follow the instructions provided by your application for installing the private key that the application will use to decrypt tokens from your Azure AD tenant. 1. Add the certificate to the application configuration in Azure AD. You can add the public cert to your application configuration within the Azure p 1. Go to the [Azure portal](https://portal.azure.com). -1. Go to the **Azure Active Directory > Enterprise applications** blade and then select the application that you wish to configure token encryption for. +1. Search for and select the **Azure Active Directory**. ++1. Select **Enterprise applications** blade and then select the application that you wish to configure token encryption for. 1. On the application's page, select **Token encryption**. To configure token encryption, follow these steps: 1. In the application's page, select **Manifest** to edit the [application manifest](../develop/reference-app-manifest.md). -1. Set the value for the `tokenEncryptionKeyId` attribute. - The following example shows an application manifest configured with two encryption certificates, and with the second selected as the active one using the tokenEncryptionKeyId. ```json To configure token encryption, follow these steps: } ``` -# [PowerShell](#tab/azure-powershell) +# [Azure AD PowerShell](#tab/azuread-powershell) 1. Use the latest Azure AD PowerShell module to connect to your tenant. To configure token encryption, follow these steps: $app.TokenEncryptionKeyId ``` +# [Microsoft Graph PowerShell](#tab/msgraph-powershell) +1. Use the Microsoft Graph PowerShell module to connect to your tenant. ++1. Set the token encryption settings using the **[Update-MgApplication](/powershell/module/microsoft.graph.applications/update-mgapplication?view=graph-powershell-1.0&preserve-view=true)** command. ++ ```powershell ++ Update-MgApplication -ApplicationId <ApplicationObjectId> -KeyCredentials "<KeyCredentialsObject>" -TokenEncryptionKeyId <keyID> ++ ``` ++1. Read the token encryption settings using the following commands. ++ ```powershell ++ $app=Get-MgApplication -ApplicationId <ApplicationObjectId> ++ $app.KeyCredentials ++ $app.TokenEncryptionKeyId ++ ``` # [Microsoft Graph](#tab/microsoft-graph) 1. Update the application's `keyCredentials` with an X.509 certificate for encryption. The following example shows a Microsoft Graph JSON payload with a collection of key credentials associated with the application. To configure token encryption, follow these steps: - ## Next steps * Find out [How Azure AD uses the SAML protocol](../develop/active-directory-saml-protocol-reference.md) |
active-directory | Whats New Docs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/whats-new-docs.md | Title: "What's new in Azure Active Directory application management" description: "New and updated documentation for the Azure Active Directory application management." Previously updated : 06/06/2023 Last updated : 07/04/2023 +## June 2023 ++### Updated articles ++- [Manage consent to applications and evaluate consent requests](manage-consent-requests.md) +- [Plan application migration to Azure Active Directory](migrate-adfs-apps-phases-overview.md) +- [Tutorial: Configure Secure Hybrid Access with Azure Active Directory and Silverfort](silverfort-integration.md) +- [Tutorial: Migrate your applications from Okta to Azure Active Directory](migrate-applications-from-okta.md) +- [Tutorial: Configure Datawiza to enable Azure Active Directory Multi-Factor Authentication and single sign-on to Oracle JD Edwards](datawiza-sso-oracle-jde.md) +- [Tutorial: Configure Datawiza to enable Azure Active Directory Multi-Factor Authentication and single sign-on to Oracle PeopleSoft](datawiza-sso-oracle-peoplesoft.md) +- [Tutorial: Configure Cloudflare with Azure Active Directory for secure hybrid access](cloudflare-integration.md) +- [Configure Datawiza for Azure AD Multi-Factor Authentication and single sign-on to Oracle EBS](datawiza-sso-mfa-oracle-ebs.md) +- [Tutorial: Configure F5 BIG-IP Access Policy Manager for Kerberos authentication](f5-big-ip-kerberos-advanced.md) +- [Tutorial: Configure F5 BIG-IP Easy Button for Kerberos single sign-on](f5-big-ip-kerberos-easy-button.md) + ## May 2023 ### New articles Welcome to what's new in Azure Active Directory (Azure AD) application managemen - [Configure F5 BIG-IP Access Policy Manager for form-based SSO](f5-big-ip-forms-advanced.md) - [Tutorial: Configure F5 BIG-IP Easy Button for SSO to Oracle EBS](f5-big-ip-oracle-enterprise-business-suite-easy-button.md) - [Tutorial: Configure F5 BIG-IP Access Policy Manager for header-based single sign-on](f5-big-ip-header-advanced.md)-## March 2023 --### Updated articles --- [Move application authentication to Azure Active Directory](migrate-adfs-apps-to-azure.md)-- [Quickstart: Create and assign a user account](add-application-portal-assign-users.md)-- [Configure sign-in behavior using Home Realm Discovery](configure-authentication-for-federated-users-portal.md)-- [Disable auto-acceleration sign-in](prevent-domain-hints-with-home-realm-discovery.md)-- [Review permissions granted to enterprise applications](manage-application-permissions.md)-- [Migrate application authentication to Azure Active Directory](migrate-application-authentication-to-azure-active-directory.md)-- [Configure permission classifications](configure-permission-classifications.md)-- [Restrict access to a tenant](tenant-restrictions.md)-- [Tutorial: Migrate Okta sign-on policies to Azure Active Directory Conditional Access](migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access.md)-- [Delete an enterprise application](delete-application-portal.md)-- [Restore an enterprise application in Azure AD](restore-application.md) |
active-directory | Cloud Attendance Management System King Of Time Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cloud-attendance-management-system-king-of-time-tutorial.md | + + Title: Azure Active Directory SSO integration with CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME +description: Learn how to configure single sign-on between Azure Active Directory and CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME. ++++++++ Last updated : 07/02/2023+++++# Azure Active Directory SSO integration with CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME ++In this article, you'll learn how to integrate CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME with Azure Active Directory (Azure AD). CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME is No.1 in the attendance management system market share "KING OF TIME" reached 2.77 million active users as of April 2023. It is a cloud attendance management system with high satisfaction, recognition, and the No. 1 market share. From offices and stores to teleworking and telecommuting in an emergency. Efficient attendance management that has become complicated by paper time cards and Excel is automatically aggregated. When you integrate CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME with Azure AD, you can: ++* Control in Azure AD who has access to CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME. +* Enable your users to be automatically signed-in to CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You'll configure and test Azure AD single sign-on for CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME in a test environment. CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME supports **SP** initiated single sign-on. ++> [!NOTE] +> Identifier of this application is a fixed string value so only one instance can be configured in one tenant. ++## Prerequisites ++To integrate Azure Active Directory with CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME from the Azure AD gallery ++Add CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME from the Azure AD application gallery to configure single sign-on with CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. 
++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type one of the following URLs: + + | **Identifier** | + || + | `https://s3.ta.kingoftime.jp/saml/v2.0/acs` | + | `https://s2.ta.kingoftime.jp/saml/v2.0/acs` | ++ b. In the **Reply URL** textbox, type one of the following URLs: + + | **Reply URL** | + || + | `https://s2.ta.kingoftime.jp/saml/v2.0/acs` | + | `https://s3.ta.kingoftime.jp/saml/v2.0/acs` | ++ c. In the **Sign on URL** textbox, type one of the following URLs: ++ | **Sign on URL** | + |-| + | `https://s2.ta.kingoftime.jp/admin` | + | `https://s3.ta.kingoftime.jp/admin` | + | `https://s2.ta.kingoftime.jp/independent/recorder2/personal` | + | `https://s3.ta.kingoftime.jp/independent/recorder2/personal` | ++1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. ++  ++1. On the **Set up CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME** section, copy the appropriate URL(s) based on your requirement. ++  ++## Configure CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME SSO ++To configure single sign-on on **CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME support team](https://www.kingoftime.jp/contact/). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME test user ++In this section, you create a user called Britta Simon in CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME. Work with [CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME support team](https://www.kingoftime.jp/contact/) to add the users in the CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME platform. Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with following options. ++* Click on **Test this application** in Azure portal. This will redirect to CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME Sign-on URL where you can initiate the login flow. ++* Go to CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME Sign-on URL directly and initiate the login flow from there. ++* You can use Microsoft My Apps. 
When you click the CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME tile in the My Apps, this will redirect to CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). ++## Next steps ++Once you configure CLOUD ATTENDANCE MANAGEMENT SYSTEM KING OF TIME you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
active-directory | Connect1 Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/connect1-tutorial.md | + + Title: Azure Active Directory SSO integration with Connect1 +description: Learn how to configure single sign-on between Azure Active Directory and Connect1. ++++++++ Last updated : 07/02/2023+++++# Azure Active Directory SSO integration with Connect1 ++In this article, you'll learn how to integrate Connect1 with Azure Active Directory (Azure AD). Connect1 provides complete fleet analytics, real-time status of your fleet, viewing historical trends, receiving on-demand notifications, alerts, and reporting, creating geofences. When you integrate Connect1 with Azure AD, you can: ++* Control in Azure AD who has access to Connect1. +* Enable your users to be automatically signed-in to Connect1 with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You'll configure and test Azure AD single sign-on for Connect1 in a test environment. Connect1 supports both **SP** and **IDP** initiated single sign-on. ++> [!NOTE] +> Identifier of this application is a fixed string value so only one instance can be configured in one tenant. ++## Prerequisites ++To integrate Azure Active Directory with Connect1, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Connect1 single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the Connect1 application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add Connect1 from the Azure AD gallery ++Add Connect1 from the Azure AD application gallery to configure single sign-on with Connect1. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **Connect1** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. 
On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure. ++1. If you wish to configure the application in **SP** initiated mode, then perform the following step: ++ In the **Sign on URL** textbox, type the URL: + `https://pct.phillips-connect.com` ++1. Connect1 application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. ++  ++1. In addition to above, Connect1 application expects few more attributes to be passed back in SAML response, which are shown below. These attributes are also pre populated but you can review them as per your requirements. ++ | Name | Source Attribute| + | | | + | companyname | user.companyname | + | roles | user.assignedroles | ++ > [!NOTE] + > Please click [here](../develop/howto-add-app-roles-in-azure-ad-apps.md#app-roles-ui) to know how to configure Role in Azure AD. ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer. ++  ++1. On the **Set up Connect1** section, copy the appropriate URL(s) based on your requirement. ++  ++## Configure Connect1 SSO ++To configure single sign-on on **Connect1** side, you need to send the downloaded **Certificate (PEM)** and appropriate copied URLs from Azure portal to [Connect1 support team](mailto:xirgo_mis@sensata.com). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create Connect1 test user ++In this section, you create a user called Britta Simon at Connect1 SSO. Work with [Connect1 SSO support team](mailto:xirgo_mis@sensata.com) to add the users in the Connect1 SSO platform. Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with following options. ++#### SP initiated: ++* Click on **Test this application** in Azure portal. This will redirect to Connect1 Sign-on URL where you can initiate the login flow. ++* Go to Connect1 Sign-on URL directly and initiate the login flow from there. ++#### IDP initiated: ++* Click on **Test this application** in Azure portal and you should be automatically signed in to the Connect1 for which you set up the SSO. ++You can also use Microsoft My Apps to test the application in any mode. When you click the Connect1 tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Connect1 for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). ++## Next steps ++Once you configure Connect1 you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
active-directory | Db Education Portal For Schools Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/db-education-portal-for-schools-tutorial.md | + + Title: Azure Active Directory SSO integration with DB Education Portal for Schools +description: Learn how to configure single sign-on between Azure Active Directory and DB Education Portal for Schools. ++++++++ Last updated : 07/02/2023+++++# Azure Active Directory SSO integration with DB Education Portal for Schools ++In this article, you'll learn how to integrate DB Education Portal for Schools with Azure Active Directory (Azure AD). Providing single sign-on access through Azure AD, for the DB Education Portal, available for Schools and Multi Academy Trusts across the United Kingdom. When you integrate DB Education Portal for Schools with Azure AD, you can: ++* Control in Azure AD who has access to DB Education Portal for Schools. +* Enable your users to be automatically signed-in to DB Education Portal for Schools with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You'll configure and test Azure AD single sign-on for DB Education Portal for Schools in a test environment. DB Education Portal for Schools supports **SP** initiated single sign-on. ++> [!NOTE] +> Identifier of this application is a fixed string value so only one instance can be configured in one tenant. ++## Prerequisites ++To integrate Azure Active Directory with DB Education Portal for Schools, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* DB Education Portal for Schools single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the DB Education Portal for Schools application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add DB Education Portal for Schools from the Azure AD gallery ++Add DB Education Portal for Schools from the Azure AD application gallery to configure single sign-on with DB Education Portal for Schools. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. 
In the Azure portal, on the **DB Education Portal for Schools** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type the value: + `DBEducation` ++ b. In the **Reply URL** textbox, type a URL using one of the following patterns: + + | **Reply URL** | + || + | `https://intranet.<CustomerName>.domain.extension/governorintranet/wp-login.php?saml_acs` | + | `https://portal.<CustomerName>.domain.extension/governorintranet/wp-login.php?saml_acs` | + | `https://intranet.<CustomerName>.domain.extension/studentportal/wp-login.php?saml_acs` | + | `https://portal.<CustomerName>.domain.extension/studentportal/wp-login.php?saml_acs` | + | `https://intranet.<CustomerName>.domain.extension/staffportal/wp-login.php?saml_acs` | + | `https://portal.<CustomerName>.domain.extension/staffportal/wp-login.php?saml_acs` | + | `https://intranet.<CustomerName>.domain.extension/parentportal/wp-login.php?saml_acs` | + | `https://portal.<CustomerName>.domain.extension/parentportal/wp-login.php?saml_acs` | + | `https://intranet.<CustomerName>.domain.extension/familyportal/wp-login.php?saml_acs` | + | `https://portal.<CustomerName>.domain.extension/familyportal/wp-login.php?saml_acs` | ++ c. In the **Sign on URL** textbox, type a URL using one of the following patterns: ++ | **Sign on URL** | + |-| + | `https://portal.<CustomerName>.domain.extension` | + | `https://intranet.<CustomerName>.domain.extension` | ++ > [!NOTE] + > These values are not real. Update these values with the actual Reply URL and Sign on URL. Contact [DB Education Portal for Schools support team](mailto:contact@dbeducation.org.uk) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. ++1. DB Education Portal for Schools application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. ++  ++1. In addition to above, DB Education Portal for Schools application expects few more attributes to be passed back in SAML response, which are shown below. These attributes are also pre populated but you can review them as per your requirements. ++ | Name | Source Attribute| + | | | + | groups | user.groups | ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer. ++  ++## Configure DB Education Portal for Schools SSO ++To configure single sign-on on **DB Education Portal for Schools** side, you need to send the **App Federation Metadata Url** to [DB Education Portal for Schools support team](mailto:contact@dbeducation.org.uk). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create DB Education Portal for Schools test user ++In this section, you create a user called Britta Simon at DB Education Portal for Schools SSO. Work with [DB Education Portal for Schools SSO support team](mailto:contact@dbeducation.org.uk) to add the users in the DB Education Portal for Schools SSO platform. 
Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with following options. ++* Click on **Test this application** in Azure portal. This will redirect to DB Education Portal for Schools Sign-on URL where you can initiate the login flow. ++* Go to DB Education Portal for Schools Sign-on URL directly and initiate the login flow from there. ++* You can use Microsoft My Apps. When you click the DB Education Portal for Schools tile in the My Apps, this will redirect to DB Education Portal for Schools Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). ++## Next steps ++Once you configure DB Education Portal for Schools you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
active-directory | Optiturn Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/optiturn-tutorial.md | + + Title: Azure Active Directory SSO integration with OptiTurn +description: Learn how to configure single sign-on between Azure Active Directory and OptiTurn. ++++++++ Last updated : 07/02/2023+++++# Azure Active Directory SSO integration with OptiTurn ++In this article, you'll learn how to integrate OptiTurn with Azure Active Directory (Azure AD). OptiTurn is a returns management platform that helps retailers route returned items, improve warehouse operations, and manage inventory backlogs. When you integrate OptiTurn with Azure AD, you can: ++* Control in Azure AD who has access to OptiTurn. +* Enable your users to be automatically signed-in to OptiTurn with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You'll configure and test Azure AD single sign-on for OptiTurn in a test environment. OptiTurn supports **SP** initiated single sign-on and **Just In Time** user provisioning. ++> [!NOTE] +> Identifier of this application is a fixed string value so only one instance can be configured in one tenant. ++## Prerequisites ++To integrate Azure Active Directory with OptiTurn, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* OptiTurn single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the OptiTurn application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add OptiTurn from the Azure AD gallery ++Add OptiTurn from the Azure AD application gallery to configure single sign-on with OptiTurn. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **OptiTurn** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. 
On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type the URL: + `https://optiturn.com/sp` ++ b. In the **Reply URL** textbox, type a URL using one of the following patterns: + + | **Reply URL** | + || + | `https://optiturn.com/auth/saml/<Customer_Name>_azure_saml/callback` | + | `https://<Environment>.optiturn.com/auth/saml/<Customer_Name>_azure_saml/callback` | ++ c. In the **Sign on URL** textbox, type one of the following URL/pattern: ++ | **Sign on URL** | + |-| + | ` https://optiturn.com/session/new` | + | `https://<Environment>.optiturn.com/session/new` | ++ > [!NOTE] + > These values are not real. Update these values with the actual Reply URL and Sign on URL. Contact [OptiTurn support team](mailto:support@optoro.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. ++1. OptiTurn application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. ++  ++1. In addition to above, OptiTurn application expects few more attributes to be passed back in SAML response, which are shown below. These attributes are also pre populated but you can review them as per your requirements. ++ | Name | Source Attribute| + | | | + | email | user.mail | + | first_name | user.givenname | + | last_name | user.surname | ++ > [!Note] + > The warehouse_identifier assertion attribute is recommended for ease of use, but it is not required. warehouse_identifier is an identifier for the warehouse where a given employee is physically located. We will match the identifier against a warehouse that is configured in OptiTurn. The user's activity and data will then be "scoped" to that warehouse. ++1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. ++  ++1. On the **Set up OptiTurn** section, copy the appropriate URL(s) based on your requirement. ++  ++## Configure OptiTurn SSO ++To configure single sign-on on **OptiTurn** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [OptiTurn support team](mailto:support@optoro.com). They set this setting to have the SAML SSO connection set properly on both sides ++### Create OptiTurn test user ++In this section, a user called B.Simon is created in OptiTurn. OptiTurn supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in OptiTurn, a new one is commonly created after authentication. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with following options. ++* Click on **Test this application** in Azure portal. This will redirect to OptiTurn Sign-on URL where you can initiate the login flow. ++* Go to OptiTurn Sign-on URL directly and initiate the login flow from there. ++* You can use Microsoft My Apps. When you click the OptiTurn tile in the My Apps, this will redirect to OptiTurn Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). 
++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). ++## Next steps ++Once you configure OptiTurn you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
active-directory | Oracle Fusion Erp Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oracle-fusion-erp-provisioning-tutorial.md | This section guides you through the steps to configure the Azure AD provisioning > You may also choose to enable SAML-based single sign-on for Oracle Fusion ERP by following the instructions provided in the [Oracle Fusion ERP Single sign-on tutorial](oracle-fusion-erp-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, though these two features complement each other. > [!NOTE]-> To learn more about Oracle Fusion ERP's SCIM endpoint, refer to [REST API for Common Features in Oracle Applications Cloud](https://docs.oracle.com/en/cloud/saas/applications-common/18b/farca/https://docsupdatetracker.net/index.html). +> To learn more about Oracle Fusion ERP's SCIM endpoint, refer to [REST API for Common Features in Oracle Applications Cloud](https://docs.oracle.com/en/cloud/saas/applications-common/23b/farca/https://docsupdatetracker.net/index.html). ### To configure automatic user provisioning for Fuze in Azure AD: |
active-directory | Surfconext Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/surfconext-tutorial.md | + + Title: Azure Active Directory SSO integration with SURFconext +description: Learn how to configure single sign-on between Azure Active Directory and SURFconext. ++++++++ Last updated : 07/02/2023+++++# Azure Active Directory SSO integration with SURFconext ++In this article, you'll learn how to integrate SURFconext with Azure Active Directory (Azure AD). SURF connected institutions can use SURFconext to log in to many cloud applications with their institution credentials. When you integrate SURFconext with Azure AD, you can: ++* Control in Azure AD who has access to SURFconext. +* Enable your users to be automatically signed-in to SURFconext with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You'll configure and test Azure AD single sign-on for SURFconext in a test environment. SURFconext supports **SP** initiated single sign-on and **Just In Time** user provisioning. ++> [!NOTE] +> Identifier of this application is a fixed string value so only one instance can be configured in one tenant. ++## Prerequisites ++To integrate Azure Active Directory with SURFconext, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* SURFconext single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the SURFconext application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add SURFconext from the Azure AD gallery ++Add SURFconext from the Azure AD application gallery to configure single sign-on with SURFconext. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **SURFconext** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. 
On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type one of the following URLs: ++ | Environment | URL | + |-|| + | Production |`https://engine.surfconext.nl/authentication/sp/metadata` | + | Staging |`https://engine.test.surfconext.nl/authentication/sp/metadata` | ++ b. In the **Reply URL** textbox, type one of the following URLs: ++ | Environment | URL | + |-|| + | Production | `https://engine.surfconext.nl/authentication/sp/consume-assertion` | + | Staging | `https://engine.test.surfconext.nl/authentication/sp/consume-assertion` | ++ c. In the **Sign on URL** textbox, type one of the following URLs: ++ | Environment | URL | + |-|| + | Production | `https://engine.surfconext.nl/authentication/sp/debug` | + | Staging | `https://engine.test.surfconext.nl/authentication/sp/debug` | ++1. SURFconext application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. ++  ++ > [!Note] + > You can remove or delete these default attributes manually under Additional claims section, if it is not required. ++1. In addition to above, SURFconext application expects few more attributes to be passed back in SAML response, which are shown below. These attributes are also pre populated but you can review them as per your requirements. ++ | Name | Source Attribute| + | | | + | urn:mace:dir:attribute-def:cn | user.displayname | + | urn:mace:dir:attribute-def:displayName | user.displayname | + | urn:mace:dir:attribute-def:eduPersonPrincipalName | user.userprincipalname | + | urn:mace:dir:attribute-def:givenName | user.givenname | + | urn:mace:dir:attribute-def:mail | user.mail | + | urn:mace:dir:attribute-def:preferredLanguage | user.preferredlanguage | + | urn:mace:dir:attribute-def:sn | user.surname | + | urn:mace:dir:attribute-def:uid | user.userprincipalname | + | urn:mace:terena.org:attribute-def:schacHomeOrganization | user.userprincipalname | ++1. To perform Transform operation for **urn:mace:terena.org:attribute-def:schacHomeOrganization** claim, select **Transformation** button as a Source under **Manage claim** section. ++1. In the **Manage transformation** page, perform the following steps: ++  ++ 1. Select **Extract()** from the dropdown in **Transformation** field and click **After matching** button. ++ 1. Select **Attribute** as a **Parameter 1 (Input)**. ++ 1. In the **Attribute name** field, select **user.userprinciplename** from the dropdown. ++ 1. Select **@** value from the dropdown. ++ 1. Click **Add**. ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer. ++  ++## Configure SURFconext SSO ++To configure single sign-on on **SURFconext** side, you need to send the **App Federation Metadata Url** to [SURFconext support team](mailto:support@surfconext.nl). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create SURFconext test user ++In this section, a user called B.Simon is created in SURFconext. SURFconext supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in SURFconext, a new one is commonly created after authentication. 
++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with following options. ++* Click on **Test this application** in Azure portal. This will redirect to SURFconext Sign-on URL where you can initiate the login flow. ++* Go to SURFconext Sign-on URL directly and initiate the login flow from there. ++* You can use Microsoft My Apps. When you click the SURFconext tile in the My Apps, this will redirect to SURFconext Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). ++## Next steps ++Once you configure SURFconext you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
app-service | Deploy Github Actions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-github-actions.md | OpenID Connect is an authentication method that uses short-lived tokens. Setting ("credential.json" contains the following content) { "name": "<CREDENTIAL-NAME>",- "issuer": "https://token.actions.githubusercontent.com/", + "issuer": "https://token.actions.githubusercontent.com", "subject": "repo:organization/repository:ref:refs/heads/main", "description": "Testing", "audiences": [ |
app-service | How To Create From Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-create-from-template.md | parameterPath="PATH/azuredeploy.parameters.json" az deployment group create --resource-group "YOUR-RG-NAME-HERE" --template-file $templatePath --parameters $parameterPath ``` -It can take more than one hour for the App Service Environment to be created. +Creating the App Service Environment usually takes about an hour, but if it is a zone redundant App Service Environment or we are experiencing unexpected demand in a region, the creation process can take several hours to complete. ## Next steps |
automation | Extension Based Hybrid Runbook Worker Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md | Azure Automation stores and manages runbooks and then delivers them to one or mo | Windows (x64) | Linux (x64) | |||-| ● Windows Server 2022 (including Server Core) <br> ● Windows Server 2019 (including Server Core) <br> ● Windows Server 2016, version 1709, and 1803 (excluding Server Core) <br> ● Windows Server 2012, 2012 R2 (excluding Server Core) <br> ● Windows 10 Enterprise (including multi-session) and Pro | ● Debian GNU/Linux 8, 9, 10, and 11 <br> ● Ubuntu 18.04 LTS, 20.04 LTS, and 22.04 LTS <br> ● SUSE Linux Enterprise Server 15.2, and 15.3 <br> ● Red Hat Enterprise Linux Server 7, and 8ΓÇ»</br> *Hybrid Worker extension would follow support timelines of the OS vendor.| +| ● Windows Server 2022 (including Server Core) <br> ● Windows Server 2019 (including Server Core) <br> ● Windows Server 2016, version 1709, and 1803 (excluding Server Core) <br> ● Windows Server 2012, 2012 R2 (excluding Server Core) <br> ● Windows 10 Enterprise (including multi-session) and Pro | ● Debian GNU/Linux 8, 9, 10, and 11 <br> ● Ubuntu 18.04 LTS, 20.04 LTS, and 22.04 LTS <br> ● SUSE Linux Enterprise Server 15.2, and 15.3 <br> ● Red Hat Enterprise Linux Server 7, and 8ΓÇ»</br> ● Oracle Linux 7 and 8 <br> *Hybrid Worker extension would follow support timelines of the OS vendor*.| ### Other Requirements |
automation | Migrate Existing Agent Based Hybrid Worker To Extension Based Workers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md | The purpose of the Extension-based approach is to simplify the installation and | Windows (x64) | Linux (x64) | |||-| ● Windows Server 2022 (including Server Core) <br> ● Windows Server 2019 (including Server Core) <br> ● Windows Server 2016, version 1709 and 1803 (excluding Server Core) <br> ● Windows Server 2012, 2012 R2 (excluding Server Core) <br> ● Windows 10 Enterprise (including multi-session) and Pro| ● Debian GNU/Linux 8,9,10, and 11 <br> ● Ubuntu 18.04 LTS, 20.04 LTS, and 22.04 LTS <br> ● SUSE Linux Enterprise Server 15.2, and 15.3 <br> ● Red Hat Enterprise Linux Server 7, and 8 </br> *Hybrid Worker extension would follow support timelines of the OS vendor.ΓÇ»| +| ● Windows Server 2022 (including Server Core) <br> ● Windows Server 2019 (including Server Core) <br> ● Windows Server 2016, version 1709 and 1803 (excluding Server Core) <br> ● Windows Server 2012, 2012 R2 (excluding Server Core) <br> ● Windows 10 Enterprise (including multi-session) and Pro| ● Debian GNU/Linux 8,9,10, and 11 <br> ● Ubuntu 18.04 LTS, 20.04 LTS, and 22.04 LTS <br> ● SUSE Linux Enterprise Server 15.2, and 15.3 <br> ● Red Hat Enterprise Linux Server 7, and 8 </br> ● Oracle Linux 7 and 8 <br> *Hybrid Worker extension would follow support timelines of the OS vendor*.ΓÇ»| ### Other Requirements |
azure-monitor | Alerts Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-overview.md | Last updated 07/19/2022 + # What are Azure Monitor alerts? Alerts help you detect and address issues before users notice them by proactively notifying you when Azure Monitor data indicates there might be a problem with your infrastructure or application. This table provides a brief description of each alert type. For more information |[Log alerts](alerts-types.md#log-alerts)|Log alerts allow users to use a Log Analytics query to evaluate resource logs at a predefined frequency.| |[Activity log alerts](alerts-types.md#activity-log-alerts)|Activity log alerts are triggered when a new activity log event occurs that matches defined conditions. Resource Health alerts and Service Health alerts are activity log alerts that report on your service and resource health.| |[Smart detection alerts](alerts-types.md#smart-detection-alerts)|Smart detection on an Application Insights resource automatically warns you of potential performance problems and failure anomalies in your web application. You can migrate smart detection on your Application Insights resource to create alert rules for the different smart detection modules.|-|[Prometheus alerts](alerts-types.md#prometheus-alerts)|Prometheus alerts are used for alerting on the performance and health of Kubernetes clusters, including Azure Kubernetes Service (AKS). The alert rules are based on PromQL, which is an open-source query language.| +|[Prometheus alerts](alerts-types.md#prometheus-alerts)|Prometheus alerts are used for alerting on Prometheus metrics stored in [Azure Monitor managed services for Prometheus](../essentials/prometheus-metrics-overview.md). The alert rules are based on the PromQL open-source query language.| + ## Recommended alert rules If you don't have alert rules defined for the selected resource, you can [enable recommended out-of-the-box alert rules in the Azure portal](alerts-manage-alert-rules.md#enable-recommended-alert-rules-in-the-azure-portal). For information about pricing, see [Azure Monitor pricing](https://azure.microso - [Learn about action groups](../alerts/action-groups.md) - [Learn about alert processing rules](alerts-action-rules.md) - [Manage your alerts programmatically](alerts-manage-alert-instances.md#manage-your-alerts-programmatically)+ |
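As a concrete illustration of one of the alert types listed above (a metric alert rule), here's a minimal Azure CLI sketch; the resource ID, names, and threshold are hypothetical placeholders rather than values from the article.

```azurecli-interactive
# Create a metric alert that fires when average CPU on a VM exceeds 80 percent.
az monitor metrics alert create \
    --name high-cpu-alert \
    --resource-group <RESOURCE-GROUP> \
    --scopes /subscriptions/<SUBSCRIPTION-ID>/resourceGroups/<RESOURCE-GROUP>/providers/Microsoft.Compute/virtualMachines/<VM-NAME> \
    --condition "avg Percentage CPU > 80" \
    --description "Alert when average CPU stays above 80 percent"
```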
azure-monitor | Alerts Using Migration Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-using-migration-tool.md | - Title: Migrate Azure Monitor alert rules -description: Learn how to use the voluntary migration tool to migrate your classic alert rules. - Previously updated : 2/23/2022---# Use the voluntary migration tool to migrate your classic alert rules --As [previously announced](monitoring-classic-retirement.md), classic alerts in Azure Monitor are retired for public cloud users, though still in limited use until **31 May 2021**. Classic alerts for Azure Government cloud and Azure China 21Vianet will retire on **29 February 2024**. --A migration tool is available in the Azure portal to customers who used classic alert rules and who want to trigger migration themselves. This article explains how to use the migration tool. --## Before you migrate --The migration process converts classic alert rules to new, equivalent alert rules, and creates action groups. In preparation, be aware of the following points: --- Both the notification payload format and the APIs to create and manage new alert rules are different from classic alert rules because they support more features. [Learn how to prepare for the migration](alerts-prepare-migration.md).--- Some classic alert rules cannot be migrated by using the tool. [Learn which rules cannot be migrated and what to do with them](alerts-understand-migration.md#manually-migrating-classic-alerts-to-newer-alerts).-- > [!NOTE] - > The migration process won't impact the evaluation of your classic alert rules. They'll continue to run and send alerts until they're migrated and the new alert rules take effect. --## How to use the migration tool --To trigger the migration of your classic alert rules in the Azure portal, follow these steps: --1. In [Azure portal](https://portal.azure.com), select **Monitor**. --1. Select **Alerts**, and then select **Manage alert rules** or **View classic alerts**. --1. Select **Migrate to new rules** to go to the migration landing page. This page shows a list of all your subscriptions and their migration status: --  -- All subscriptions that can be migrated by using the tool are marked as **Ready to migrate**. -- > [!NOTE] - > The migration tool is rolling out in phases to all the subscriptions that use classic alert rules. In the early phases of the rollout, you might see some subscriptions marked as not ready for migration. --1. Select one or more subscriptions, and then select **Preview migration**. -- The resulting page shows the details of classic alert rules that will be migrated for one subscription at a time. You can also select **Download the migration details for this subscription** to get the details in a CSV format. --  --1. Specify one or more email addresses to be notified of migration status. You'll receive email when the migration is complete or if any action is needed from you. --1. Select **Start Migration**. Read the information shown in the confirmation dialog box and confirm that you're ready to start the migration process. -- > [!IMPORTANT] - > After you initiate migration for a subscription, you won't be able to edit or create classic alert rules for that subscription. This restriction ensures that no changes to your classic alert rules are lost during migration to the new rules. Although you won't be able to change your classic alert rules, they'll still continue to run and to provide alerts until they've been migrated. 
After the migration is complete for your subscription, you can't use classic alert rules anymore. --  --1. When migration is complete, or if action is required from you, you'll receive an email at the addresses that you provided earlier. You can also periodically check the status at the migration landing page in the portal. --## Frequently asked questions --### Why is my subscription listed as not ready for migration? --The migration tool is rolling out to customers in phases. In the early phases, most or all of your subscriptions might be marked as **Not ready for migration**. --When a subscription becomes ready for migration, the subscription owner will receive an email message stating that the tool is available. Keep an eye out for this message. --### Who can trigger the migration? --Users who have the Monitoring Contributor role assigned to them at the subscription level can trigger the migration. [Learn more about Azure role-based access control for the migration process](alerts-understand-migration.md#who-can-trigger-the-migration). --### How long will the migration take? --Migration is completed for most subscriptions in under an hour. You can keep track of the migration progress on the migration landing page. During the migration, be assured that your alerts are still running either in the classic alerts system or in the new one. --### What can I do if I run into a problem during migration? --See the [troubleshooting guide](alerts-understand-migration.md#common-problems-and-remedies) for help with problems you might face during migration. If any action is needed from you to complete the migration, you'll be notified at the email addresses you provided when you set up the tool. --## Next steps --- [Prepare for the migration](alerts-prepare-migration.md)-- [Understand how the migration tool works](alerts-understand-migration.md) |
azure-monitor | Kql Machine Learning Azure Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/kql-machine-learning-azure-monitor.md | In this tutorial, you learn how to: > [!NOTE] > This tutorial provides links to a Log Analytics demo environment in which you can run the KQL query examples. However, you can implement the same KQL queries and principles in all [Azure Monitor tools that use KQL](log-query-overview.md). + ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - A workspace with log data.++ ## Create a time series Use the KQL `make-series` operator to create a time series. |
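To make the `make-series` step concrete, here's a minimal sketch that runs such a query from the Azure CLI (the `az monitor log-analytics query` command may require the log-analytics extension). The workspace GUID is a placeholder, and the `AppRequests`/`DurationMs` names assume the demo environment's schema.

```azurecli-interactive
# Build an hourly time series of average request duration over the last 7 days.
az monitor log-analytics query \
    --workspace <WORKSPACE-GUID> \
    --analytics-query "AppRequests | make-series AvgDuration=avg(DurationMs) default=0 on TimeGenerated from ago(7d) to now() step 1h"
```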
azure-resource-manager | Move Support Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md | Before starting your move operation, review the [checklist](./move-resource-grou > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | - |-> | backupvaults | [**Yes**](../../backup/backup-vault-overview.md#use-azure-portal-to-move-backup-vault-to-a-different-resource-group) | [**Yes**](../../backup/backup-vault-overview.md#use-azure-portal-to-move-backup-vault-to-a-different-subscription) | No | +> | backupvaults | [**Yes**](../../backup/create-manage-backup-vault.md#use-azure-portal-to-move-backup-vault-to-a-different-resource-group) | [**Yes**](../../backup/create-manage-backup-vault.md#use-azure-portal-to-move-backup-vault-to-a-different-subscription) | No | ## Microsoft.DataShare |
backup | Azure Kubernetes Service Cluster Backup Using Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-using-cli.md | For more information on the supported scenarios, limitations, and availability, A Backup vault is a management entity in Azure that stores backup data for various newer workloads that Azure Backup supports, such as Azure Database for PostgreSQL servers and Azure Disks. Backup vaults make it easy to organize your backup data, while minimizing management overhead. Backup vaults are based on the Azure Resource Manager model of Azure, which provides enhanced capabilities to help secure backup data. -Before you create a Backup vault, choose the storage redundancy of the data in the vault, and then create the Backup vault with that storage redundancy and the location. Learn more about [creating a Backup vault](backup-vault-overview.md#create-a-backup-vault). +Before you create a Backup vault, choose the storage redundancy of the data in the vault, and then create the Backup vault with that storage redundancy and the location. Learn more about [creating a Backup vault](create-manage-backup-vault.md#create-a-backup-vault). >[!Note] >Though the selected vault may have the *global-redundancy* setting, backup for AKS currently supports **Operational Tier** only. All backups are stored in your subscription in the same region as that of the AKS cluster, and they aren't copied to Backup vault storage. |
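For reference, creating such a Backup vault from the Azure CLI follows the same pattern used in the other CLI articles below; this is a sketch with placeholder names, and `LocallyRedundant` is only an example redundancy setting.

```azurecli-interactive
# Create a Backup vault with a system-assigned identity and locally redundant storage.
az dataprotection backup-vault create \
    --resource-group <RESOURCE-GROUP> \
    --vault-name <VAULT-NAME> \
    --location <REGION> \
    --type SystemAssigned \
    --storage-settings datastore-type="VaultStore" type="LocallyRedundant"
```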
backup | Azure Kubernetes Service Cluster Backup Using Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-using-powershell.md | For more information on the supported scenarios, limitations, and availability, A Backup vault is a management entity in Azure that stores backup data for various newer workloads that Azure Backup supports, such as Azure Database for PostgreSQL servers and Azure Disks. Backup vaults make it easy to organize your backup data while minimizing management overhead. They are based on the Azure Resource Manager model, which provides enhanced capabilities to help secure backup data. Before you create a Backup vault, choose the storage redundancy of the data in the vault, and then create the Backup vault with that storage redundancy and the location. -Here, we're creating a Backup vault *TestBkpVault* in *West US* region under the resource group *testBkpVaultRG*. Use the `New-AzDataProtectionBackupVault` cmdlet to create a Backup vault. Learn more about [creating a Backup vault](backup-vault-overview.md#create-a-backup-vault). +Here, we're creating a Backup vault *TestBkpVault* in *West US* region under the resource group *testBkpVaultRG*. Use the `New-AzDataProtectionBackupVault` cmdlet to create a Backup vault. Learn more about [creating a Backup vault](create-manage-backup-vault.md#create-a-backup-vault). >[!Note] >Though the selected vault may have the *global-redundancy* setting, backup for AKS currently supports **Operational Tier** only. All backups are stored in your subscription in the same region as that of the AKS cluster, and they aren't copied to Backup vault storage. |
backup | Azure Kubernetes Service Cluster Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup.md | A Backup vault is a management entity that stores recovery points created over t >[!Note] >The Backup vault is a new resource used for backing up newly supported workloads and is different from the already existing Recovery Services vault. -Learn [how to create a Backup vault](backup-vault-overview.md#create-a-backup-vault). +Learn [how to create a Backup vault](create-manage-backup-vault.md#create-a-backup-vault). ## Create a backup policy |
backup | Backup Blobs Storage Account Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-blobs-storage-account-cli.md | See the [prerequisites](./blob-backup-configure-manage.md#before-you-start) and Backup vault is a storage entity in Azure that stores backup data for various newer workloads that Azure Backup supports, such as Azure Database for PostgreSQL servers, and blobs in a storage account and Azure Disks. Backup vaults make it easy to organize your backup data, while minimizing management overhead. Backup vaults are based on the Azure Resource Manager model of Azure, which provides enhanced capabilities to help secure backup data. -Before creating a Backup vault, choose the storage redundancy of the data within the vault. Then proceed to create the Backup vault with that storage redundancy and the location. In this article, we'll create a Backup vault _TestBkpVault_, in the region _westus_, under the resource group _testBkpVaultRG_. Use the [az dataprotection vault create](/cli/azure/dataprotection/backup-vault#az-dataprotection-backup-vault-create) command to create a Backup vault. Learn more about [creating a Backup vault](./backup-vault-overview.md#create-a-backup-vault). +Before creating a Backup vault, choose the storage redundancy of the data within the vault. Then proceed to create the Backup vault with that storage redundancy and the location. In this article, we'll create a Backup vault _TestBkpVault_, in the region _westus_, under the resource group _testBkpVaultRG_. Use the [az dataprotection vault create](/cli/azure/dataprotection/backup-vault#az-dataprotection-backup-vault-create) command to create a Backup vault. Learn more about [creating a Backup vault](./create-manage-backup-vault.md#create-a-backup-vault). ```azurecli-interactive az dataprotection backup-vault create -g testBkpVaultRG --vault-name TestBkpVault -l westus --type SystemAssigned --storage-settings datastore-type="VaultStore" type="LocallyRedundant" |
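After the create command above completes, you can confirm the vault's properties from the CLI. This is a minimal sketch that assumes the same `testBkpVaultRG`/`TestBkpVault` names used in the article.

```azurecli-interactive
# Verify the newly created Backup vault and its storage settings.
az dataprotection backup-vault show \
    --resource-group testBkpVaultRG \
    --vault-name TestBkpVault
```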
backup | Backup Blobs Storage Account Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-blobs-storage-account-ps.md | See the [prerequisites](./blob-backup-configure-manage.md#before-you-start) and A Backup vault is a storage entity in Azure that holds backup data for various newer workloads that Azure Backup supports, such as Azure Database for PostgreSQL servers and Azure blobs. Backup vaults make it easy to organize your backup data, while minimizing management overhead. Backup vaults are based on the Azure Resource Manager model of Azure, which provides enhanced capabilities to help secure backup data. -Before creating a backup vault, choose the storage redundancy of the data within the vault. Then proceed to create the backup vault with that storage redundancy and the location. In this article, we will create a backup vault _TestBkpVault_ in region _westus_, under the resource group _testBkpVaultRG_. Use the [New-AzDataProtectionBackupVault](/powershell/module/az.dataprotection/new-azdataprotectionbackupvault) command to create a backup vault.Learn more about [creating a Backup vault](./backup-vault-overview.md#create-a-backup-vault). +Before creating a backup vault, choose the storage redundancy of the data within the vault. Then proceed to create the backup vault with that storage redundancy and the location. In this article, we will create a backup vault _TestBkpVault_ in region _westus_, under the resource group _testBkpVaultRG_. Use the [New-AzDataProtectionBackupVault](/powershell/module/az.dataprotection/new-azdataprotectionbackupvault) command to create a backup vault. Learn more about [creating a Backup vault](./create-manage-backup-vault.md#create-a-backup-vault). ```azurepowershell-interactive $storageSetting = New-AzDataProtectionBackupVaultStorageSettingObject -Type LocallyRedundant/GeoRedundant -DataStoreType VaultStore |
backup | Backup Managed Disks Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-managed-disks-cli.md | For information on the Azure Disk backup region availability, supported scenario Backup vault is a storage entity in Azure that stores backup data for various newer workloads that Azure Backup supports, such as Azure Database for PostgreSQL servers, blobs in a storage account, and Azure Disks. Backup vaults make it easy to organize your backup data, while minimizing management overhead. Backup vaults are based on the Azure Resource Manager model of Azure, which provides enhanced capabilities to help secure backup data. -Before you create a Backup vault, choose the storage redundancy of the data within the vault. Then proceed to create the Backup vault with that storage redundancy and the location. In this article, we'll create a Backup vault _TestBkpVault_, in the region _westus_, under the resource group _testBkpVaultRG_. Use the [az dataprotection vault create](/cli/azure/dataprotection/backup-vault#az-dataprotection-backup-vault-create) command to create a Backup vault. Learn more about [creating a Backup vault](./backup-vault-overview.md#create-a-backup-vault). +Before you create a Backup vault, choose the storage redundancy of the data within the vault. Then proceed to create the Backup vault with that storage redundancy and the location. In this article, we'll create a Backup vault _TestBkpVault_, in the region _westus_, under the resource group _testBkpVaultRG_. Use the [az dataprotection vault create](/cli/azure/dataprotection/backup-vault#az-dataprotection-backup-vault-create) command to create a Backup vault. Learn more about [creating a Backup vault](./create-manage-backup-vault.md#create-a-backup-vault). ```azurecli-interactive az dataprotection backup-vault create -g testBkpVaultRG --vault-name TestBkpVault -l westus --type SystemAssigned --storage-settings datastore-type="VaultStore" type="LocallyRedundant" |
backup | Backup Managed Disks Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-managed-disks-ps.md | For information on the Azure Disk backup region availability, supported scenario A Backup vault is a storage entity in Azure that holds backup data for various newer workloads that Azure Backup supports, such as Azure Database for PostgreSQL servers and Azure Disks. Backup vaults make it easy to organize your backup data, while minimizing management overhead. Backup vaults are based on the Azure Resource Manager model of Azure, which provides enhanced capabilities to help secure backup data. -Before creating a backup vault, choose the storage redundancy of the data within the vault. Then proceed to create the backup vault with that storage redundancy and the location. In this article, we will create a backup vault "TestBkpVault" in "westus" region under the resource group "testBkpVaultRG". Use the [New-AzDataProtectionBackupVault](/powershell/module/az.dataprotection/new-azdataprotectionbackupvault) command to create a backup vault.Learn more about [creating a Backup vault](./backup-vault-overview.md#create-a-backup-vault). +Before creating a backup vault, choose the storage redundancy of the data within the vault. Then proceed to create the backup vault with that storage redundancy and the location. In this article, we will create a backup vault "TestBkpVault" in "westus" region under the resource group "testBkpVaultRG". Use the [New-AzDataProtectionBackupVault](/powershell/module/az.dataprotection/new-azdataprotectionbackupvault) command to create a backup vault.Learn more about [creating a Backup vault](./create-manage-backup-vault.md#create-a-backup-vault). ```azurepowershell-interactive $storageSetting = New-AzDataProtectionBackupVaultStorageSettingObject -Type LocallyRedundant/GeoRedundant -DataStoreType VaultStore |
backup | Backup Managed Disks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-managed-disks.md | A Backup vault is a storage entity in Azure that holds backup data for various n  -1. In the **Basics** tab, provide subscription, resource group, backup vault name, region, and backup storage redundancy. Continue by selecting **Review + create**. Learn more about [creating a Backup vault](./backup-vault-overview.md#create-a-backup-vault). +1. In the **Basics** tab, provide subscription, resource group, backup vault name, region, and backup storage redundancy. Continue by selecting **Review + create**. Learn more about [creating a Backup vault](./create-manage-backup-vault.md#create-a-backup-vault).  |
backup | Backup Postgresql Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-postgresql-cli.md | Backup vault is a storage entity in Azure. This stores the backup data for new w Before you create a Backup vault, choose the storage redundancy of the data within the vault. Then proceed to create the Backup vault with that storage redundancy and the location. -In this article, we'll create a Backup vault _TestBkpVault_, in the region _westus_, under the resource group _testBkpVaultRG_. Use the [az dataprotection vault create](/cli/azure/dataprotection/backup-vault#az-dataprotection-backup-vault-create) command to create a Backup vault. Learn more about [creating a Backup vault](./backup-vault-overview.md#create-a-backup-vault). +In this article, we'll create a Backup vault _TestBkpVault_, in the region _westus_, under the resource group _testBkpVaultRG_. Use the [az dataprotection vault create](/cli/azure/dataprotection/backup-vault#az-dataprotection-backup-vault-create) command to create a Backup vault. Learn more about [creating a Backup vault](./create-manage-backup-vault.md#create-a-backup-vault). ```azurecli-interactive az dataprotection backup-vault create -g testBkpVaultRG --vault-name TestBkpVault -l westus --type SystemAssigned --storage-settings datastore-type="VaultStore" type="LocallyRedundant" |
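A typical next step after creating the vault in this article is to fetch the default backup policy template for PostgreSQL as a starting point for a custom policy. This is a minimal sketch; the output file name is an assumption.

```azurecli-interactive
# Retrieve the default policy template for Azure Database for PostgreSQL and save it for editing.
az dataprotection backup-policy get-default-policy-template \
    --datasource-type AzureDatabaseForPostgreSQL > policy.json
```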
backup | Backup Postgresql Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-postgresql-ps.md | Backup vault is a storage entity in Azure that stores the backup data for variou Before you create a Backup vault, choose the storage redundancy of the data within the vault. Then proceed to create the backup vault with that storage redundancy and the location. -In this article, we'll create a Backup vault *TestBkpVault*, in *westus* region, under the resource group *testBkpVaultRG*. Use the [New-AzDataProtectionBackupVault](/powershell/module/az.dataprotection/new-azdataprotectionbackupvault) command to create a Backup vault. Learn more about [creating a Backup vault](./backup-vault-overview.md#create-a-backup-vault). +In this article, we'll create a Backup vault *TestBkpVault*, in *westus* region, under the resource group *testBkpVaultRG*. Use the [New-AzDataProtectionBackupVault](/powershell/module/az.dataprotection/new-azdataprotectionbackupvault) command to create a Backup vault. Learn more about [creating a Backup vault](./create-manage-backup-vault.md#create-a-backup-vault). ```azurepowershell-interactive $storageSetting = New-AzDataProtectionBackupVaultStorageSettingObject -Type LocallyRedundant/GeoRedundant -DataStoreType VaultStore |
backup | Backup Support Matrix Iaas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md | Title: Support matrix for Azure VM backups description: Get a summary of support settings and limitations for backing up Azure VMs by using the Azure Backup service. Previously updated : 05/15/2023 Last updated : 07/05/2023 Recovery points on DPM or MABS disk | 64 for file servers, and 448 for app serve **Restore disk** | This option restores a VM disk, which can you can then use to create a new VM.<br/><br/> Azure Backup provides a template to help you customize and create a VM. <br/><br> The restore job generates a template that you can download and use to specify custom VM settings and create a VM.<br/><br/> The disks are copied to the resource group that you specify.<br/><br/> Alternatively, you can attach the disk to an existing VM, or create a new VM by using PowerShell.<br/><br/> This option is useful if you want to customize the VM, add configuration settings that weren't there at the time of backup, or add settings that must be configured via the template or PowerShell. **Replace existing** | You can restore a disk and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it has been deleted, you can't use this option.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk, and it stores the snapshot in the staging location that you specify. Existing disks connected to the VM are replaced with the selected restore point.<br/><br/> The snapshot is copied to the vault and retained in accordance with the retention policy. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>This option is supported for unencrypted managed VMs and for VMs [created from custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's not supported for unmanaged disks and VMs, classic VMs, and [generalized VMs](../virtual-machines/windows/capture-image-resource.md).<br/><br/> If the restore point has more or fewer disks than the current VM, the number of disks in the restore point will only reflect the VM configuration.<br><br> This option is also supported for VMs with linked resources, like [user-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md) and [Azure Key Vault](../key-vault/general/overview.md). 
**Cross Region (secondary region)** | You can use cross-region restore to restore Azure VMs in the secondary region, which is an [Azure paired region](../availability-zones/cross-region-replication-azure.md).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> We don't currently support the [Replace existing disks](./backup-azure-arm-restore-vms.md#replace-existing-disks) option.<br><br> Backup admins and app admins have permissions to perform the restore operation on a secondary region.-**Cross Subscription (preview)** | You can use cross-subscription restore to restore Azure managed VMs in different subscriptions.<br><br> You can restore Azure VMs or disks to any subscription from restore points. This is one of the Azure role-based access control (RBAC) capabilities. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross-subscription restore is unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. It's also unsupported for [unmanaged VMs](backup-azure-arm-restore-vms.md#restoring-unmanaged-vms-and-disks-as-managed), [encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups), and [trusted launch VMs](backup-support-matrix-iaas.md#tvm-backup). +**Cross Subscription (preview)** | You can use cross-subscription restore to restore Azure managed VMs in different subscriptions.<br><br> You can restore Azure VMs or disks to any subscription (within the same tenant as the source subscription) from restore points. This is one of the Azure role-based access control (RBAC) capabilities. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross-subscription restore is unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. It's also unsupported for [unmanaged VMs](backup-azure-arm-restore-vms.md#restoring-unmanaged-vms-and-disks-as-managed), [encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups), and [trusted launch VMs](backup-support-matrix-iaas.md#tvm-backup). **Cross Zonal Restore** | You can use cross-zonal restore to restore Azure zone-pinned VMs in available zones.<br><br> You can restore Azure VMs or disks to different zones (one of the Azure RBAC capabilities) from restore points. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross-zonal restore is unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) of restore points. It's also unsupported for [encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [trusted launch VMs](backup-support-matrix-iaas.md#tvm-backup). 
## Support for file-level restore |
backup | Backup Vault Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-vault-overview.md | Title: Overview of Backup vaults + Title: Overview of the Backup vaults description: An overview of Backup vaults. Previously updated : 06/06/2023 Last updated : 07/05/2023 This section discusses the options available for encrypting your backup data sto By default, all your data is encrypted using platform-managed keys. You don't need to take any explicit action from your end to enable this encryption. It applies to all workloads being backed up to your Backup vault. -## Create a Backup vault --A Backup vault is a management entity that stores recovery points created over time and provides an interface to perform backup related operations. These include taking on-demand backups, performing restores, and creating backup policies. --To create a Backup vault, follow these steps. --### Sign in to Azure --Sign in to the [Azure portal](https://portal.azure.com). --### Create Backup vault --1. Type **Backup vaults** in the search box. -2. Under **Services**, select **Backup vaults**. -3. On the **Backup vaults** page, select **Add**. -4. On the **Basics** tab, under **Project details**, make sure the correct subscription is selected and then choose **Create new** resource group. Type *myResourceGroup* for the name. --  --5. Under **Instance details**, type *myVault* for the **Backup vault name** and choose your region of choice, in this case *East US* for your **Region**. -6. Now choose your **Storage redundancy**. Storage redundancy cannot be changed after protecting items to the vault. -7. We recommend that if you're using Azure as a primary backup storage endpoint, continue to use the default **Geo-redundant** setting. -8. If you don't use Azure as a primary backup storage endpoint, choose **Locally redundant**, which reduces the Azure storage costs. Learn more about [geo](../storage/common/storage-redundancy.md#geo-redundant-storage) and [local](../storage/common/storage-redundancy.md#locally-redundant-storage) redundancy. --  --9. Select the Review + create button at the bottom of the page. --  --## Delete a Backup vault --This section describes how to delete a Backup vault. It contains instructions for removing dependencies and then deleting a vault. --### Before you start --You can't delete a Backup vault with any of the following dependencies: --- You can't delete a vault that contains protected data sources (for example, Azure database for PostgreSQL servers).-- You can't delete a vault that contains backup data.--If you try to delete the vault without removing the dependencies, you'll encounter the following error messages: -->Cannot delete the Backup vault as there are existing backup instances or backup policies in the vault. Delete all backup instances and backup policies that are present in the vault and then try deleting the vault. --Ensure that you cycle through the **Datasource type** filter options in **Backup center** to not miss any existing Backup Instance or policy that needs to be removed, before being able to delete the Backup Vault. -- --### Proper way to delete a vault -->[!WARNING] ->The following operation is destructive and can't be undone. All backup data and backup items associated with the protected server will be permanently deleted. Proceed with caution. --To properly delete a vault, you must follow the steps in this order: --- Verify if there are any protected items:- - Go to **Backup Instances** in the left navigation bar. 
All items listed here must be deleted first. --After you've completed these steps, you can continue to delete the vault. --### Delete the Backup vault --When there are no more items in the vault, select **Delete** on the vault dashboard. You'll see a confirmation text asking if you want to delete the vault. -- --1. Select **Yes** to verify that you want to delete the vault. The vault is deleted. The portal returns to the **New** service menu. --## Monitor and manage the Backup vault --This section explains how to use the Backup vault **Overview** dashboard to monitor and manage your Backup vaults. The overview pane contains two tiles: **Jobs** and **Instances**. -- --### Manage Backup instances --In the **Jobs** tile, you get a summarized view of all backup and restore related jobs in your Backup vault. Selecting any of the numbers in this tile allows you to view more information on jobs for a particular datasource type, operation type, and status. -- --### Manage Backup jobs --In the **Backup Instances** tile, you get a summarized view of all backup instances in your Backup vault. Selecting any of the numbers in this tile allows you to view more information on backup instances for a particular datasource type and protection status. -- --## Move a Backup vault across Azure subscriptions/resource groups --This section explains how to move a Backup vault (configured for Azure Backup) across Azure subscriptions and resource groups using the Azure portal. -->[!Note] ->You can also move Backup vaults to a different resource group or subscription using [PowerShell](/powershell/module/az.resources/move-azresource) and [CLI](/cli/azure/resource#az-resource-move). --### Supported regions --The vault move across subscriptions and resource groups is supported in all public and national regions. --### Use Azure portal to move Backup vault to a different resource group --1. Sign in to the [Azure portal](https://portal.azure.com/). --1. Open the list of Backup vaults and select the vault you want to move. -- The vault dashboard displays the vault details. -- :::image type="content" source="./media/backup-vault-overview/vault-dashboard-to-move-to-resource-group-inline.png" alt-text="Screenshot showing the dashboard of the vault to be moved to another resource group." lightbox="./media/backup-vault-overview/vault-dashboard-to-move-to-resource-group-expanded.png"::: --1. In the vault **Overview** menu, click **Move**, and then select **Move to another resource group**. -- :::image type="content" source="./media/backup-vault-overview/select-move-to-another-resource-group-inline.png" alt-text="Screenshot showing the option for moving the Backup vault to another resource group." lightbox="./media/backup-vault-overview/select-move-to-another-resource-group-expanded.png"::: - >[!Note] - >Only the admin subscription has the required permissions to move a vault. --1. In the **Resource group** drop-down list, select an existing resource group or select **Create new** to create a new resource group. -- The subscription remains the same and gets auto-populated. -- :::image type="content" source="./media/backup-vault-overview/select-existing-or-create-resource-group-inline.png" alt-text="Screenshot showing the selection of an existing resource group or creation of a new resource group." lightbox="./media/backup-vault-overview/select-existing-or-create-resource-group-expanded.png"::: --1. On the **Resources to move** tab, the Backup vault that needs to be moved will undergo validation. 
This process may take a few minutes. Wait till the validation is complete. -- :::image type="content" source="./media/backup-vault-overview/move-validation-process-to-move-to-resource-group-inline.png" alt-text="Screenshot showing the Backup vault validation status." lightbox="./media/backup-vault-overview/move-validation-process-to-move-to-resource-group-expanded.png"::: --1. Select the checkbox **I understand that tools and scripts associated with moved resources will not work until I update them to use new resource IDs** to confirm, and then select **Move**. - - >[!Note] - >The resource path changes after moving vault across resource groups or subscriptions. Ensure that you update the tools and scripts with the new resource path after the move operation completes. --Wait till the move operation is complete to perform any other operations on the vault. Any operations performed on the Backup vault will fail if performed while move is in progress. When the process is complete, the Backup vault should appear in the target resource group. -->[!Important] ->If you encounter any error while moving the vault, refer to the [Error codes and troubleshooting section](#error-codes-and-troubleshooting). --### Use Azure portal to move Backup vault to a different subscription --1. Sign in to the [Azure portal](https://portal.azure.com/). --1. Open the list of Backup vaults and select the vault you want to move. - - The vault dashboard displays the vault details. -- :::image type="content" source="./media/backup-vault-overview/vault-dashboard-to-move-to-another-subscription-inline.png" alt-text="Screenshot showing the dashboard of the vault to be moved to another Azure subscription." lightbox="./media/backup-vault-overview/vault-dashboard-to-move-to-another-subscription-expanded.png"::: --1. In the vault **Overview** menu, click **Move**, and then select **Move to another subscription**. -- :::image type="content" source="./media/backup-vault-overview/select-move-to-another-subscription-inline.png" alt-text="Screenshot showing the option for moving the Backup vault to another Azure subscription." lightbox="./media/backup-vault-overview/select-move-to-another-subscription-expanded.png"::: - >[!Note] - >Only the admin subscription has the required permissions to move a vault. --1. In the **Subscription** drop-down list, select an existing subscription. -- For moving vaults across subscriptions, the target subscription must reside in the same tenant as the source subscription. To move a vault to a different tenant, see [Transfer subscription to a different directory](../role-based-access-control/transfer-subscription.md). --1. In the **Resource group** drop-down list, select an existing resource group or select **Create new** to create a new resource group. -- :::image type="content" source="./media/backup-vault-overview/select-existing-or-create-resource-group-to-move-to-other-subscription-inline.png" alt-text="Screenshot showing the selection of an existing resource group or creation of a new resource group in another Azure subscription." lightbox="./media/backup-vault-overview/select-existing-or-create-resource-group-to-move-to-other-subscription-expanded.png"::: --1. On the **Resources to move** tab, the Backup vault that needs to be moved will undergo validation. This process may take a few minutes. Wait till the validation is complete. 
-- :::image type="content" source="./media/backup-vault-overview/move-validation-process-to-move-to-another-subscription-inline.png" alt-text="Screenshot showing the validation status of Backup vault to be moved to another Azure subscription." lightbox="./media/backup-vault-overview/move-validation-process-to-move-to-another-subscription-expanded.png"::: --1. Select the checkbox **I understand that tools and scripts associated with moved resources will not work until I update them to use new resource IDs** to confirm, and then select **Move**. - - >[!Note] - >The resource path changes after moving vault across resource groups or subscriptions. Ensure that you update the tools and scripts with the new resource path after the move operation completes. --Wait till the move operation is complete to perform any other operations on the vault. Any operations performed on the Backup vault will fail if performed while move is in progress. When the process completes, the Backup vault should appear in the target Subscription and Resource group. -->[!Important] ->If you encounter any error while moving the vault, refer to the [Error codes and troubleshooting section](#error-codes-and-troubleshooting). ----### Error codes and troubleshooting --Troubleshoot the following common issues you might encounter during Backup vault move: --#### BackupVaultMoveResourcesPartiallySucceeded --**Cause**: You may face this error when Backup vault move succeeds only partially. --**Recommendation**: The issue should get resolved automatically within 36 hours. If it persists, contact Microsoft Support. --#### BackupVaultMoveResourcesCriticalFailure --**Cause**: You may face this error when Backup vault move fails critically. --**Recommendation**: The issue should get resolved automatically within 36 hours. If it persists, contact Microsoft Support. --#### UserErrorBackupVaultResourceMoveInProgress --**Cause**: You may face this error if you try to perform any operations on the Backup vault while itΓÇÖs being moved. --**Recommendation**: Wait till the move operation is complete, and then retry. --#### UserErrorBackupVaultResourceMoveNotAllowedForMultipleResources --**Cause**: You may face this error if you try to move multiple Backup vaults in a single attempt. --**Recommentation**: Ensure that only one Backup vault is selected for every move operation. --#### UserErrorBackupVaultResourceMoveNotAllowedUntilResourceProvisioned --**Cause**: You may face this error if the vault is not yet provisioned. --**Recommendation**: Retry the operation after some time. --#### BackupVaultResourceMoveIsNotEnabled --**Cause**: Resource move for Backup vault is currently not supported in the selected Azure region. --**Recommendation**: Ensure that you've selected one of the supported regions to move Backup vaults. See [Supported regions](#supported-regions). --#### UserErrorCrossTenantMSIMoveNotSupported --**Cause**: This error occurs if the subscription with which resource is associated has moved to a different Tenant, but the Managed Identity is still associated with the old Tenant. --**Recommendation**: Remove the Managed Identity from the existing Tenant; move the resource and add it again to the new one. - ## Cross Region Restore support for PostgreSQL using Azure Backup (preview) Azure Backup allows you to replicate your backups to an additional Azure paired region by using Geo-redundant Storage (GRS) to protect your backups from regional outages. 
When you enable the backups with GRS, the backups in the secondary region become accessible only when Microsoft declares an outage in the primary region. However, Cross Region Restore enables you to access and perform restores from the secondary region recovery points even when no outage occurs in the primary region; thus, enables you to perform drills to assess regional resiliency. +Learn [how to perform Cross Region Restore](create-manage-backup-vault.md#perform-cross-region-restore-using-azure-portal-preview). + >[!Note] >- Cross Region Restore is now available for PostgreSQL backups protected in Backup vaults. >- Backup vaults enabled with Cross Region Restore will be automatically charged at [RA-GRS rates](https://azure.microsoft.com/pricing/details/backup/) for the PostgreSQL backups stored in the vault once the feature is generally available. -### Perform Cross Region Restore using Azure portal --Follow these steps: --1. Sign in to [Azure portal](https://portal.azure.com/). --1. [Create a new Backup vault](backup-vault-overview.md#create-backup-vault) or choose an existing Backup vault, and then enable Cross Region Restore by going to **Properties** > **Cross Region Restore (Preview)**, and choose **Enable**. -- :::image type="content" source="./media/backup-vault-overview/enable-cross-region-restore-for-postgresql-database.png" alt-text="Screenshot shows how to enable Cross Region Restore for PostgreSQL database." lightbox="./media/backup-vault-overview/enable-cross-region-restore-for-postgresql-database.png"::: --1. Go to the Backup vaultΓÇÖs **Overview** pane, and then [configure a backup for PostgreSQL database](backup-azure-database-postgresql.md). --1. Once the backup is complete in the primary region, it can take up to *12 hours* for the recovery point in the primary region to get replicated to the secondary region. -- To check the availability of recovery point in the secondary region, go to the **Backup center** > **Backup Instances** > **Filter to Azure Database for PostgreSQL servers**, filter **Instance Region** as *Secondary Region*, and then select the required Backup Instance. -- :::image type="content" source="./media/backup-vault-overview/check-availability-of-recovery-point-in-secondary-region.png" alt-text="Screenshot shows how to check availability for the recovery points in the secondary region." lightbox="./media/backup-vault-overview/check-availability-of-recovery-point-in-secondary-region.png"::: --1. The recovery points available in the secondary region are now listed. -- Choose **Restore to secondary region**. -- :::image type="content" source="./media/backup-vault-overview/initiate-restore-to-secondary-region.png" alt-text="Screenshot shows how to initiate restores to the secondary region." lightbox="./media/backup-vault-overview/initiate-restore-to-secondary-region.png"::: -- You can also trigger restores from the respective backup instance. -- :::image type="content" source="./media/backup-vault-overview/trigger-restores-from-respective-backup-instance.png" alt-text="Screenshot shows how to trigger restores from the respective backup instance." lightbox="./media/backup-vault-overview/trigger-restores-from-respective-backup-instance.png"::: --1. Select **Restore to secondary region** to review the target region selected, and then select the appropriate recovery point and restore parameters. --1. 
Once the restore starts, you can monitor the completion of the restore operation under **Backup Jobs** of the Backup vault by filtering **Jobs workload type** to *Azure Database for PostgreSQL servers* and **Instance Region** to *Secondary Region*. -- :::image type="content" source="./media/backup-vault-overview/monitor-postgresql-restore-to-secondary-region.png" alt-text="Screenshot shows how to monitor the postgresql restore to the secondary region." lightbox="./media/backup-vault-overview/monitor-postgresql-restore-to-secondary-region.png"::: - ## Next steps -- [Configure backup on Azure PostgreSQL databases](backup-azure-database-postgresql.md#configure-backup-on-azure-postgresql-databases)+- [Create and manage Backup vault](create-manage-backup-vault.md) |
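If you prefer to monitor such backup and restore jobs from the command line instead of the portal, this is a minimal sketch with placeholder vault names; output columns may vary by CLI version.

```azurecli-interactive
# List recent backup and restore jobs for a Backup vault.
az dataprotection job list \
    --resource-group <RESOURCE-GROUP> \
    --vault-name <VAULT-NAME> \
    --output table
```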
backup | Blob Backup Configure Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-configure-manage.md | A [Backup vault](backup-vault-overview.md) is a management entity that stores re >[!NOTE] >The Backup vault is a new resource that is used for backing up new supported workloads and is different from the already existing Recovery Services vault. -For instructions on how to create a Backup vault, see the [Backup vault documentation](backup-vault-overview.md#create-a-backup-vault). +For instructions on how to create a Backup vault, see the [Backup vault documentation](create-manage-backup-vault.md#create-a-backup-vault). ## Grant permissions to the Backup vault on storage accounts |
backup | Create Manage Backup Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/create-manage-backup-vault.md | + + Title: Create and manage Backup vaults +description: Learn how to create and manage the Backup vaults. + Last updated : 07/05/2023++++++# Create and manage Backup vaults ++This article describes how to create Backup vaults and manage them. ++A Backup vault is a storage entity in Azure that houses backup data for certain newer workloads that Azure Backup supports. You can use Backup vaults to hold backup data for various Azure services, such Azure Database for PostgreSQL servers and newer workloads that Azure Backup will support. Backup vaults make it easy to organize your backup data, while minimizing management overhead. Backup vaults are based on the Azure Resource Manager model of Azure, which provides features such as: ++- **Enhanced capabilities to help secure backup data**: With Backup vaults, Azure Backup provides security capabilities to protect cloud backups. The security features ensure you can secure your backups, and safely recover data, even if production and backup servers are compromised. [Learn more](backup-azure-security-feature.md) ++- **Azure role-based access control (Azure RBAC)**: Azure RBAC provides fine-grained access management control in Azure. [Azure provides various built-in roles](../role-based-access-control/built-in-roles.md), and Azure Backup has three [built-in roles to manage recovery points](backup-rbac-rs-vault.md). Backup vaults are compatible with Azure RBAC, which restricts backup and restore access to the defined set of user roles. [Learn more](backup-rbac-rs-vault.md) ++++## Create a Backup vault ++A Backup vault is a management entity that stores recovery points created over time and provides an interface to perform backup related operations. These include taking on-demand backups, performing restores, and creating backup policies. ++To create a Backup vault, follow these steps. ++### Sign in to Azure ++Sign in to the [Azure portal](https://portal.azure.com). ++### Create Backup vault ++1. Type **Backup vaults** in the search box. +2. Under **Services**, select **Backup vaults**. +3. On the **Backup vaults** page, select **Add**. +4. On the **Basics** tab, under **Project details**, make sure the correct subscription is selected and then choose **Create new** resource group. Type *myResourceGroup* for the name. ++  ++5. Under **Instance details**, type *myVault* for the **Backup vault name** and choose your region of choice, in this case *East US* for your **Region**. +6. Now choose your **Storage redundancy**. Storage redundancy cannot be changed after protecting items to the vault. +7. We recommend that if you're using Azure as a primary backup storage endpoint, continue to use the default **Geo-redundant** setting. +8. If you don't use Azure as a primary backup storage endpoint, choose **Locally redundant**, which reduces the Azure storage costs. Learn more about [geo](../storage/common/storage-redundancy.md#geo-redundant-storage) and [local](../storage/common/storage-redundancy.md#locally-redundant-storage) redundancy. ++  ++9. Select the Review + create button at the bottom of the page. ++  ++## Delete a Backup vault ++This section describes how to delete a Backup vault. It contains instructions for removing dependencies and then deleting a vault. 
++### Before you start ++You can't delete a Backup vault with any of the following dependencies: ++- You can't delete a vault that contains protected data sources (for example, Azure database for PostgreSQL servers). +- You can't delete a vault that contains backup data. ++If you try to delete the vault without removing the dependencies, you'll encounter the following error messages: ++>Cannot delete the Backup vault as there are existing backup instances or backup policies in the vault. Delete all backup instances and backup policies that are present in the vault and then try deleting the vault. ++Ensure that you cycle through the **Datasource type** filter options in **Backup center** to not miss any existing Backup Instance or policy that needs to be removed, before being able to delete the Backup Vault. ++ ++### Proper way to delete a vault ++>[!WARNING] +>The following operation is destructive and can't be undone. All backup data and backup items associated with the protected server will be permanently deleted. Proceed with caution. ++To properly delete a vault, you must follow the steps in this order: ++- Verify if there are any protected items: + - Go to **Backup Instances** in the left navigation bar. All items listed here must be deleted first. ++After you've completed these steps, you can continue to delete the vault. ++### Delete the Backup vault ++When there are no more items in the vault, select **Delete** on the vault dashboard. You'll see a confirmation text asking if you want to delete the vault. ++ ++1. Select **Yes** to verify that you want to delete the vault. The vault is deleted. The portal returns to the **New** service menu. ++## Monitor and manage the Backup vault ++This section explains how to use the Backup vault **Overview** dashboard to monitor and manage your Backup vaults. The overview pane contains two tiles: **Jobs** and **Instances**. ++ ++### Manage Backup instances ++In the **Jobs** tile, you get a summarized view of all backup and restore related jobs in your Backup vault. Selecting any of the numbers in this tile allows you to view more information on jobs for a particular datasource type, operation type, and status. ++ ++### Manage Backup jobs ++In the **Backup Instances** tile, you get a summarized view of all backup instances in your Backup vault. Selecting any of the numbers in this tile allows you to view more information on backup instances for a particular datasource type and protection status. ++ ++## Move a Backup vault across Azure subscriptions/resource groups ++This section explains how to move a Backup vault (configured for Azure Backup) across Azure subscriptions and resource groups using the Azure portal. ++>[!Note] +>You can also move Backup vaults to a different resource group or subscription using [PowerShell](/powershell/module/az.resources/move-azresource) and [CLI](/cli/azure/resource#az-resource-move). ++### Supported regions ++The vault move across subscriptions and resource groups is supported in all public and national regions. ++### Use Azure portal to move Backup vault to a different resource group ++1. Sign in to the [Azure portal](https://portal.azure.com/). ++1. Open the list of Backup vaults and select the vault you want to move. ++ The vault dashboard displays the vault details. ++ :::image type="content" source="./media/backup-vault-overview/vault-dashboard-to-move-to-resource-group-inline.png" alt-text="Screenshot showing the dashboard of the vault to be moved to another resource group." 
lightbox="./media/backup-vault-overview/vault-dashboard-to-move-to-resource-group-expanded.png"::: ++1. In the vault **Overview** menu, click **Move**, and then select **Move to another resource group**. ++ :::image type="content" source="./media/backup-vault-overview/select-move-to-another-resource-group-inline.png" alt-text="Screenshot showing the option for moving the Backup vault to another resource group." lightbox="./media/backup-vault-overview/select-move-to-another-resource-group-expanded.png"::: + >[!Note] + >Only the admin subscription has the required permissions to move a vault. ++1. In the **Resource group** drop-down list, select an existing resource group or select **Create new** to create a new resource group. ++ The subscription remains the same and gets auto-populated. ++ :::image type="content" source="./media/backup-vault-overview/select-existing-or-create-resource-group-inline.png" alt-text="Screenshot showing the selection of an existing resource group or creation of a new resource group." lightbox="./media/backup-vault-overview/select-existing-or-create-resource-group-expanded.png"::: ++1. On the **Resources to move** tab, the Backup vault that needs to be moved will undergo validation. This process may take a few minutes. Wait till the validation is complete. ++ :::image type="content" source="./media/backup-vault-overview/move-validation-process-to-move-to-resource-group-inline.png" alt-text="Screenshot showing the Backup vault validation status." lightbox="./media/backup-vault-overview/move-validation-process-to-move-to-resource-group-expanded.png"::: ++1. Select the checkbox **I understand that tools and scripts associated with moved resources will not work until I update them to use new resource IDs** to confirm, and then select **Move**. + + >[!Note] + >The resource path changes after moving vault across resource groups or subscriptions. Ensure that you update the tools and scripts with the new resource path after the move operation completes. ++Wait till the move operation is complete to perform any other operations on the vault. Any operations performed on the Backup vault will fail if performed while move is in progress. When the process is complete, the Backup vault should appear in the target resource group. ++>[!Important] +>If you encounter any error while moving the vault, refer to the [Error codes and troubleshooting section](#error-codes-and-troubleshooting). ++### Use Azure portal to move Backup vault to a different subscription ++1. Sign in to the [Azure portal](https://portal.azure.com/). ++1. Open the list of Backup vaults and select the vault you want to move. + + The vault dashboard displays the vault details. ++ :::image type="content" source="./media/backup-vault-overview/vault-dashboard-to-move-to-another-subscription-inline.png" alt-text="Screenshot showing the dashboard of the vault to be moved to another Azure subscription." lightbox="./media/backup-vault-overview/vault-dashboard-to-move-to-another-subscription-expanded.png"::: ++1. In the vault **Overview** menu, click **Move**, and then select **Move to another subscription**. ++ :::image type="content" source="./media/backup-vault-overview/select-move-to-another-subscription-inline.png" alt-text="Screenshot showing the option for moving the Backup vault to another Azure subscription." lightbox="./media/backup-vault-overview/select-move-to-another-subscription-expanded.png"::: + >[!Note] + >Only the admin subscription has the required permissions to move a vault. ++1. 
In the **Subscription** drop-down list, select an existing subscription. ++ For moving vaults across subscriptions, the target subscription must reside in the same tenant as the source subscription. To move a vault to a different tenant, see [Transfer subscription to a different directory](../role-based-access-control/transfer-subscription.md). ++1. In the **Resource group** drop-down list, select an existing resource group or select **Create new** to create a new resource group. ++ :::image type="content" source="./media/backup-vault-overview/select-existing-or-create-resource-group-to-move-to-other-subscription-inline.png" alt-text="Screenshot showing the selection of an existing resource group or creation of a new resource group in another Azure subscription." lightbox="./media/backup-vault-overview/select-existing-or-create-resource-group-to-move-to-other-subscription-expanded.png"::: ++1. On the **Resources to move** tab, the Backup vault that needs to be moved undergoes validation. This process may take a few minutes. Wait until the validation is complete. ++ :::image type="content" source="./media/backup-vault-overview/move-validation-process-to-move-to-another-subscription-inline.png" alt-text="Screenshot showing the validation status of Backup vault to be moved to another Azure subscription." lightbox="./media/backup-vault-overview/move-validation-process-to-move-to-another-subscription-expanded.png"::: ++1. Select the checkbox **I understand that tools and scripts associated with moved resources will not work until I update them to use new resource IDs** to confirm, and then select **Move**. + + >[!Note] + >The resource path changes after you move a vault across resource groups or subscriptions. Ensure that you update the tools and scripts with the new resource path after the move operation completes. ++Wait until the move operation is complete before you perform any other operations on the vault. Any operations performed on the Backup vault fail while the move is in progress. When the process completes, the Backup vault should appear in the target subscription and resource group. ++>[!Important] +>If you encounter any error while moving the vault, see the [Error codes and troubleshooting section](#error-codes-and-troubleshooting). ++++### Error codes and troubleshooting ++Troubleshoot the following common issues you might encounter during a Backup vault move: ++#### BackupVaultMoveResourcesPartiallySucceeded ++**Cause**: You may face this error when a Backup vault move succeeds only partially. ++**Recommendation**: The issue should get resolved automatically within 36 hours. If it persists, contact Microsoft Support. ++#### BackupVaultMoveResourcesCriticalFailure ++**Cause**: You may face this error when a Backup vault move fails critically. ++**Recommendation**: The issue should get resolved automatically within 36 hours. If it persists, contact Microsoft Support. ++#### UserErrorBackupVaultResourceMoveInProgress ++**Cause**: You may face this error if you try to perform any operations on the Backup vault while it's being moved. ++**Recommendation**: Wait until the move operation is complete, and then retry. ++#### UserErrorBackupVaultResourceMoveNotAllowedForMultipleResources ++**Cause**: You may face this error if you try to move multiple Backup vaults in a single attempt. ++**Recommendation**: Ensure that only one Backup vault is selected for every move operation.
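If you use the CLI option noted earlier, you can avoid this error by moving one vault per command. The following is only a hedged sketch using the generic `az resource move` command; the resource group names and the `<source-subscription-id>` value are placeholders you would replace, and the resource ID follows the `Microsoft.DataProtection/backupVaults` provider path used by Backup vaults.

```azurecli
# Move exactly one Backup vault at a time to a different resource group.
# Add --destination-subscription-id <target-subscription-id> to move across subscriptions instead.
az resource move \
    --destination-group targetResourceGroup \
    --ids "/subscriptions/<source-subscription-id>/resourceGroups/sourceResourceGroup/providers/Microsoft.DataProtection/backupVaults/myVault"
```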
++#### UserErrorBackupVaultResourceMoveNotAllowedUntilResourceProvisioned ++**Cause**: You may face this error if the vault isn't yet provisioned. ++**Recommendation**: Retry the operation after some time. ++#### BackupVaultResourceMoveIsNotEnabled ++**Cause**: Resource move for Backup vaults isn't currently supported in the selected Azure region. ++**Recommendation**: Ensure that you've selected one of the supported regions to move Backup vaults. See [Supported regions](#supported-regions). ++#### UserErrorCrossTenantMSIMoveNotSupported ++**Cause**: This error occurs if the subscription that the resource is associated with has moved to a different tenant, but the managed identity is still associated with the old tenant. ++**Recommendation**: Remove the managed identity from the existing tenant, move the resource, and then add the managed identity again in the new tenant. ++## Perform Cross Region Restore using Azure portal (preview) ++Follow these steps: ++1. Sign in to the [Azure portal](https://portal.azure.com/). ++1. [Create a new Backup vault](create-manage-backup-vault.md#create-backup-vault) or choose an existing Backup vault, and then enable Cross Region Restore by going to **Properties** > **Cross Region Restore (Preview)** and choosing **Enable**. ++ :::image type="content" source="./media/backup-vault-overview/enable-cross-region-restore-for-postgresql-database.png" alt-text="Screenshot shows how to enable Cross Region Restore for PostgreSQL database." lightbox="./media/backup-vault-overview/enable-cross-region-restore-for-postgresql-database.png"::: ++1. Go to the Backup vault's **Overview** pane, and then [configure a backup for PostgreSQL database](backup-azure-database-postgresql.md). ++1. Once the backup is complete in the primary region, it can take up to *12 hours* for the recovery point in the primary region to get replicated to the secondary region. ++ To check the availability of the recovery point in the secondary region, go to **Backup center** > **Backup Instances** > **Filter to Azure Database for PostgreSQL servers**, filter **Instance Region** as *Secondary Region*, and then select the required Backup Instance. ++ :::image type="content" source="./media/backup-vault-overview/check-availability-of-recovery-point-in-secondary-region.png" alt-text="Screenshot shows how to check availability for the recovery points in the secondary region." lightbox="./media/backup-vault-overview/check-availability-of-recovery-point-in-secondary-region.png"::: ++1. The recovery points available in the secondary region are now listed. ++ Choose **Restore to secondary region**. ++ :::image type="content" source="./media/backup-vault-overview/initiate-restore-to-secondary-region.png" alt-text="Screenshot shows how to initiate restores to the secondary region." lightbox="./media/backup-vault-overview/initiate-restore-to-secondary-region.png"::: ++ You can also trigger restores from the respective backup instance. ++ :::image type="content" source="./media/backup-vault-overview/trigger-restores-from-respective-backup-instance.png" alt-text="Screenshot shows how to trigger restores from the respective backup instance." lightbox="./media/backup-vault-overview/trigger-restores-from-respective-backup-instance.png"::: ++1. Select **Restore to secondary region** to review the selected target region, and then select the appropriate recovery point and restore parameters. ++1.
Once the restore starts, you can monitor the completion of the restore operation under **Backup Jobs** of the Backup vault by filtering **Jobs workload type** to *Azure Database for PostgreSQL servers* and **Instance Region** to *Secondary Region*. ++ :::image type="content" source="./media/backup-vault-overview/monitor-postgresql-restore-to-secondary-region.png" alt-text="Screenshot shows how to monitor the postgresql restore to the secondary region." lightbox="./media/backup-vault-overview/monitor-postgresql-restore-to-secondary-region.png"::: ++## Next steps ++- [Configure backup on Azure PostgreSQL databases](backup-azure-database-postgresql.md#configure-backup-on-azure-postgresql-databases) |
backup | Tutorial Postgresql Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-postgresql-backup.md | A Backup vault is a storage entity in Azure that holds backup data for various n 1. On the **Basics** tab, provide subscription, resource group, backup vault name, region, and backup storage redundancy. - Continue by selecting **Review + create**. Learn more about [creating a Backup vault](./backup-vault-overview.md#create-a-backup-vault). + Continue by selecting **Review + create**. Learn more about [creating a Backup vault](./create-manage-backup-vault.md#create-a-backup-vault). :::image type="content" source="./media/backup-managed-disks/review-and-create.png" alt-text="Screenshot showing to select Review and create vault."::: |
backup | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md | Title: What's new in Azure Backup description: Learn about new features in Azure Backup. Previously updated : 06/25/2023 Last updated : 07/05/2023 You can learn more about the new releases by bookmarking this page or by [subscr ## Updates summary +- July 2023 + - [Cross Region Restore for PostgreSQL (preview)](#cross-region-restore-for-postgresql-preview) - April 2023 - [Microsoft Azure Backup Server v4 is now generally available](#microsoft-azure-backup-server-v4-is-now-generally-available) - March 2023 You can learn more about the new releases by bookmarking this page or by [subscr - [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview) +## Cross Region Restore for PostgreSQL (preview) ++Azure Backup allows you to replicate your backups to an additional Azure paired region by using Geo-redundant Storage (GRS) to protect your backups from regional outages. When you enable the backups with GRS, the backups in the secondary region become accessible only when Microsoft declares an outage in the primary region. ++For more information, see [Cross Region Restore support for PostgreSQL using Azure Backup (preview)](backup-vault-overview.md#cross-region-restore-support-for-postgresql-using-azure-backup-preview). + ## Microsoft Azure Backup Server v4 is now generally available Azure Backup now provides Microsoft Azure Backup Server (MABS) v4, the latest edition of on-premises backup solution. Azure Backup now provides Microsoft Azure Backup Server (MABS) v4, the latest ed - It can *protect* and *run on* Windows Server 2022, Azure Stack HCI 22H2, vSphere 8.0, and SQL Server 2022. - It contains stability improvements and bug fixes on *MABS v3 UR2*. -For more information see [What's new in MABS](backup-mabs-whats-new-mabs.md). +For more information, see [What's new in MABS](backup-mabs-whats-new-mabs.md). ## Multiple backups per day for Azure VMs is now generally available Azure Backup now enables you to create a backup policy to take multiple backups a day. With this capability, you can also define the duration in which your backup jobs would trigger and align your backup schedule with the working hours when there are frequent updates to Azure Virtual Machines. |
cognitive-services | Concept Describing Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-describing-images.md | The following JSON response illustrates what the Analyze API returns when descri ## Use the API - The image description feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Description` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"description"` section. * [Quickstart: Image Analysis REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp) |
cognitive-services | Concept Face Detection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-face-detection.md | Try out the capabilities of face detection quickly and easily using Vision Studi ## Face ID -The face ID is a unique identifier string for each detected face in an image. Note that Face ID requires limited access approval, which you can apply for by filling out the [intake form](https://aka.ms/facerecognition). For more information, see the Face [limited access page](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext). You can request a face ID in your [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API call. +The face ID is a unique identifier string for each detected face in an image. Face ID requires limited access approval, which you can apply for by filling out the [intake form](https://aka.ms/facerecognition). For more information, see the Face [limited access page](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext). You can request a face ID in your [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API call. ## Face landmarks The Detection_03 model currently has the most accurate landmark detection. The e Attributes are a set of features that can optionally be detected by the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API. The following attributes can be detected: -* **Accessories**. Whether the given face has accessories. This attribute returns possible accessories including headwear, glasses, and mask, with confidence score between zero and one for each accessory. +* **Accessories**. Indicates whether the given face has accessories. This attribute returns possible accessories including headwear, glasses, and mask, with confidence score between zero and one for each accessory. * **Age**. The estimated age in years of a particular face. * **Blur**. The blurriness of the face in the image. This attribute returns a value between zero and one and an informal rating of low, medium, or high. * **Emotion**. A list of emotions with their detection confidence for the given face. Confidence scores are normalized, and the scores across all emotions add up to one. The emotions returned are happiness, sadness, neutral, anger, contempt, disgust, surprise, and fear. Attributes are a set of features that can optionally be detected by the [Face -  - For more details on how to use these values, see the [Head pose how-to guide](./how-to/use-headpose.md). -* **Makeup**. Whether the face has makeup. This attribute returns a Boolean value for eyeMakeup and lipMakeup. -* **Mask**. Whether the face is wearing a mask. This attribute returns a possible mask type, and a Boolean value to indicate whether nose and mouth are covered. + For more information on how to use these values, see the [Head pose how-to guide](./how-to/use-headpose.md). +* **Makeup**. Indicates whether the face has makeup. This attribute returns a Boolean value for eyeMakeup and lipMakeup. +* **Mask**. Indicates whether the face is wearing a mask. 
This attribute returns a possible mask type, and a Boolean value to indicate whether nose and mouth are covered. * **Noise**. The visual noise detected in the face image. This attribute returns a value between zero and one and an informal rating of low, medium, or high.-* **Occlusion**. Whether there are objects blocking parts of the face. This attribute returns a Boolean value for eyeOccluded, foreheadOccluded, and mouthOccluded. +* **Occlusion**. Indicates whether there are objects blocking parts of the face. This attribute returns a Boolean value for eyeOccluded, foreheadOccluded, and mouthOccluded. * **Smile**. The smile expression of the given face. This value is between zero for no smile and one for a clear smile. * **QualityForRecognition** The overall image quality regarding whether the image being used in the detection is of sufficient quality to attempt face recognition on. The value is an informal rating of low, medium, or high. Only "high" quality images are recommended for person enrollment, and quality at or above "medium" is recommended for identification scenarios. >[!NOTE] Use the following tips to make sure that your input images give the most accurat * The supported input image formats are JPEG, PNG, GIF (the first frame), BMP. * The image file size should be no larger than 6 MB.-* The minimum detectable face size is 36 x 36 pixels in an image that is no larger than 1920 x 1080 pixels. Images with larger than 1920 x 1080 pixels have a proportionally larger minimum face size. Reducing the face size might cause some faces not to be detected, even if they are larger than the minimum detectable face size. +* The minimum detectable face size is 36 x 36 pixels in an image that is no larger than 1920 x 1080 pixels. Images with larger than 1920 x 1080 pixels have a proportionally larger minimum face size. Reducing the face size might cause some faces not to be detected, even if they're larger than the minimum detectable face size. * The maximum detectable face size is 4096 x 4096 pixels. * Faces outside the size range of 36 x 36 to 4096 x 4096 pixels will not be detected. * Some faces might not be recognized because of technical challenges, such as: Use the following tips to make sure that your input images give the most accurat ### Input data with orientation information: -Some input images with JPEG format might contain orientation information in Exchangeable image file format (Exif) metadata. If Exif orientation is available, images will be automatically rotated to the correct orientation before sending for face detection. The face rectangle, landmarks, and head pose for each detected face will be estimated based on the rotated image. +Some input images with JPEG format might contain orientation information in Exchangeable image file format (EXIF) metadata. If EXIF orientation is available, images are automatically rotated to the correct orientation before sending for face detection. The face rectangle, landmarks, and head pose for each detected face are estimated based on the rotated image. -To properly display the face rectangle and landmarks, you need to make sure the image is rotated correctly. Most of image visualization tools will auto-rotate the image according to its Exif orientation by default. For other tools, you might need to apply the rotation using your own code. The following examples show a face rectangle on a rotated image (left) and a non-rotated image (right). 
+To properly display the face rectangle and landmarks, you need to make sure the image is rotated correctly. Most of the image visualization tools automatically rotate the image according to its EXIF orientation by default. For other tools, you might need to apply the rotation using your own code. The following examples show a face rectangle on a rotated image (left) and a non-rotated image (right).  If you're detecting faces from a video feed, you may be able to improve performa * **Smoothing**: Many video cameras apply a smoothing effect. You should turn this off if you can because it creates a blur between frames and reduces clarity. * **Shutter Speed**: A faster shutter speed reduces the amount of motion between frames and makes each frame clearer. We recommend shutter speeds of 1/60 second or faster.-* **Shutter Angle**: Some cameras specify shutter angle instead of shutter speed. You should use a lower shutter angle if possible. This will result in clearer video frames. +* **Shutter Angle**: Some cameras specify shutter angle instead of shutter speed. You should use a lower shutter angle if possible. This results in clearer video frames. >[!NOTE] > A camera with a lower shutter angle will receive less light in each frame, so the image will be darker. You'll need to determine the right level to use. |
cognitive-services | Concept Ocr | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-ocr.md | -OCR traditionally started as a machine-learning based technique for extracting text from in-the-wild and non-document images like product labels, user generated images, screenshots, street signs, and posters. For several scenarios that including running OCR on single images that are not text-heavy, you need a fast, synchronous API or service. This allows OCR to be embedded in near real-time user experiences to enrich content understanding and follow-up user actions with fast turn-around times. +OCR traditionally started as a machine-learning-based technique for extracting text from in-the-wild and non-document images like product labels, user-generated images, screenshots, street signs, and posters. For several scenarios, such as single images that aren't text-heavy, you need a fast, synchronous API or service. This allows OCR to be embedded in near real-time user experiences to enrich content understanding and follow-up user actions with fast turn-around times. -## What is Computer Vision v4.0 Read OCR (preview) +## What is Computer Vision v4.0 Read OCR (preview)? The new Computer Vision Image Analysis 4.0 REST API offers the ability to extract printed or handwritten text from images in a unified performance-enhanced synchronous API that makes it easy to get all image insights including OCR results in a single API operation. The Read OCR engine is built on top of multiple deep learning models supported by universal script-based models for [global language support](./language-support.md). |
cognitive-services | Overview Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-identity.md | As with all of the Cognitive Services resources, developers who use the Face ser Follow a quickstart to code the basic components of a face recognition app in the language of your choice. -- [Client library quickstart](quickstarts-sdk/identity-client-library.md).+- [Face quickstart](quickstarts-sdk/identity-client-library.md). |
cognitive-services | Overview Image Analysis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-image-analysis.md | For a more structured approach, follow a Training module for Image Analysis. ## Analyze Image -You can analyze images to provide insights about their visual features and characteristics. All of the features in the list below are provided by the Analyze Image API. Follow a [quickstart](./quickstarts-sdk/image-analysis-client-library-40.md) to get started. +You can analyze images to provide insights about their visual features and characteristics. All of the features in this list are provided by the Analyze Image API. Follow a [quickstart](./quickstarts-sdk/image-analysis-client-library-40.md) to get started. | Name | Description | Concept page | ||||-|**Model customization** (v4.0 preview only)|You can create and train custom models to do image classification or object detection. Bring your own images, label them with custom tags, and Image Analysis will train a model customized for your use case.|[Model customization](./concept-model-customization.md)| +|**Model customization** (v4.0 preview only)|You can create and train custom models to do image classification or object detection. Bring your own images, label them with custom tags, and Image Analysis trains a model customized for your use case.|[Model customization](./concept-model-customization.md)| |**Read text from images** (v4.0 preview only)| Version 4.0 preview of Image Analysis offers the ability to extract readable text from images. Compared with the async Computer Vision 3.2 Read API, the new version offers the familiar Read OCR engine in a unified performance-enhanced synchronous API that makes it easy to get OCR along with other insights in a single API call. |[OCR for images](concept-ocr.md)| |**Detect people in images** (v4.0 preview only)|Version 4.0 preview of Image Analysis offers the ability to detect people appearing in images. The bounding box coordinates of each detected person are returned, along with a confidence score. |[People detection](concept-people-detection.md)|-|**Generate image captions** | Generate a caption of an image in human-readable language, using complete sentences. Computer Vision's algorithms generate captions based on the objects identified in the image. <br/><br/>The version 4.0 image captioning model is a more advanced implementation and works with a wider range of input images. It is only available in the following geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US. <br/><br/>Version 4.0 also lets you use dense captioning, which generates detailed captions for individual objects that are found in the image. The API returns the bounding box coordinates (in pixels) of each object found in the image, plus a caption. You can use this functionality to generate descriptions of separate parts of an image.<br/><br/>:::image type="content" source="Images/description.png" alt-text="Photo of cows with a simple description on the right.":::| [Generate image captions (v3.2)](concept-describing-images.md)<br/>[(v4.0 preview)](concept-describe-images-40.md)| -|**Detect objects** |Object detection is similar to tagging, but the API returns the bounding box coordinates for each tag applied. For example, if an image contains a dog, cat and person, the Detect operation will list those objects together with their coordinates in the image. 
You can use this functionality to process further relationships between the objects in an image. It also lets you know when there are multiple instances of the same tag in an image. <br/><br/>:::image type="content" source="Images/detect-objects.png" alt-text="Photo of an office with a rectangle drawn around a laptop.":::| [Detect objects (v3.2)](concept-object-detection.md)<br/>[(v4.0 preview)](concept-object-detection-40.md) +|**Generate image captions** | Generate a caption of an image in human-readable language, using complete sentences. Computer Vision's algorithms generate captions based on the objects identified in the image. <br/><br/>The version 4.0 image captioning model is a more advanced implementation and works with a wider range of input images. It's only available in the following geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US. <br/><br/>Version 4.0 also lets you use dense captioning, which generates detailed captions for individual objects that are found in the image. The API returns the bounding box coordinates (in pixels) of each object found in the image, plus a caption. You can use this functionality to generate descriptions of separate parts of an image.<br/><br/>:::image type="content" source="Images/description.png" alt-text="Photo of cows with a simple description on the right.":::| [Generate image captions (v3.2)](concept-describing-images.md)<br/>[(v4.0 preview)](concept-describe-images-40.md)| +|**Detect objects** |Object detection is similar to tagging, but the API returns the bounding box coordinates for each tag applied. For example, if an image contains a dog, cat and person, the Detect operation lists those objects together with their coordinates in the image. You can use this functionality to process further relationships between the objects in an image. It also lets you know when there are multiple instances of the same tag in an image. <br/><br/>:::image type="content" source="Images/detect-objects.png" alt-text="Photo of an office with a rectangle drawn around a laptop.":::| [Detect objects (v3.2)](concept-object-detection.md)<br/>[(v4.0 preview)](concept-object-detection-40.md) |**Tag visual features**| Identify and tag visual features in an image, from a set of thousands of recognizable objects, living things, scenery, and actions. When the tags are ambiguous or not common knowledge, the API response provides hints to clarify the context of the tag. Tagging isn't limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on.<br/><br/>:::image type="content" source="Images/tagging.png" alt-text="Photo of a skateboarder with tags listed on the right.":::|[Tag visual features (v3.2)](concept-tagging-images.md)<br/>[(v4.0 preview)](concept-tag-images-40.md)|-|**Get the area of interest / smart crop** |Analyze the contents of an image to return the coordinates of the *area of interest* that matches a specified aspect ratio. Computer Vision returns the bounding box coordinates of the region, so the calling application can modify the original image as desired. <br/><br/>The version 4.0 smart cropping model is a more advanced implementation and works with a wider range of input images. It is only available in the following geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US. 
| [Generate a thumbnail (v3.2)](concept-generating-thumbnails.md)<br/>[(v4.0 preview)](concept-generate-thumbnails-40.md)| +|**Get the area of interest / smart crop** |Analyze the contents of an image to return the coordinates of the *area of interest* that matches a specified aspect ratio. Computer Vision returns the bounding box coordinates of the region, so the calling application can modify the original image as desired. <br/><br/>The version 4.0 smart cropping model is a more advanced implementation and works with a wider range of input images. It's only available in the following geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US. | [Generate a thumbnail (v3.2)](concept-generating-thumbnails.md)<br/>[(v4.0 preview)](concept-generate-thumbnails-40.md)| |**Detect brands** (v3.2 only) | Identify commercial brands in images or videos from a database of thousands of global logos. You can use this feature, for example, to discover which brands are most popular on social media or most prevalent in media product placement. |[Detect brands](concept-brand-detection.md)| |**Categorize an image** (v3.2 only)|Identify and categorize an entire image, using a [category taxonomy](Category-Taxonomy.md) with parent/child hereditary hierarchies. Categories can be used alone, or with our new tagging models.<br/><br/>Currently, English is the only supported language for tagging and categorizing images. |[Categorize an image](concept-categorizing-images.md)| | **Detect faces** (v3.2 only) |Detect faces in an image and provide information about each detected face. Computer Vision returns the coordinates, rectangle, gender, and age for each detected face.<br/><br/>You can also use the dedicated [Face API](./index-identity.yml) for these purposes. It provides more detailed analysis, such as facial identification and pose detection.|[Detect faces](concept-detecting-faces.md)| |
cognitive-services | Overview Ocr | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-ocr.md | -OCR or Optical Character Recognition is also referred to as text recognition or text extraction. Machine-learning based OCR techniques allow you to extract printed or handwritten text from images, such as posters, street signs and product labels, as well as from documents like articles, reports, forms, and invoices. The text is typically extracted as words, text lines, and paragraphs or text blocks, enabling access to digital version of the scanned text. This eliminates or significantly reduces the need for manual data entry. +OCR or Optical Character Recognition is also referred to as text recognition or text extraction. Machine-learning-based OCR techniques allow you to extract printed or handwritten text from images such as posters, street signs and product labels, as well as from documents like articles, reports, forms, and invoices. The text is typically extracted as words, text lines, and paragraphs or text blocks, enabling access to digital version of the scanned text. This eliminates or significantly reduces the need for manual data entry. ## How is OCR related to Intelligent Document Processing (IDP)? -Intelligent Document Processing (IDP) uses OCR as its foundational technology to additionally extract structure, relationships, key-values, entities, and other document-centric insights with an advanced machine-learning based AI service like [Form Recognizer](../../applied-ai-services/form-recognizer/overview.md). Form Recognizer includes a document-optimized version of **Read** as its OCR engine while delegating to other models for higher-end insights. If you are extracting text from scanned and digital documents, use [Form Recognizer Read OCR](../../applied-ai-services/form-recognizer/concept-read.md). +Intelligent Document Processing (IDP) uses OCR as its foundational technology to additionally extract structure, relationships, key-values, entities, and other document-centric insights with an advanced machine-learning based AI service like [Form Recognizer](../../applied-ai-services/form-recognizer/overview.md). Form Recognizer includes a document-optimized version of **Read** as its OCR engine while delegating to other models for higher-end insights. If you're extracting text from scanned and digital documents, use [Form Recognizer Read OCR](../../applied-ai-services/form-recognizer/concept-read.md). ## OCR engine-Microsoft's **Read** OCR engine is composed of multiple advanced machine-learning based models supporting [global languages](./language-support.md). This allows them to extract printed and handwritten text including mixed languages and writing styles. **Read** is available as cloud service and on-premises container for deployment flexibility. With the latest preview, it's also available as a synchronous API for single, non-document, image-only scenarios with performance enhancements that make it easier to implement OCR-assisted user experiences. ++Microsoft's **Read** OCR engine is composed of multiple advanced machine-learning based models supporting [global languages](./language-support.md). It can extract printed and handwritten text including mixed languages and writing styles. **Read** is available as cloud service and on-premises container for deployment flexibility. 
With the latest preview, it's also available as a synchronous API for single, non-document, image-only scenarios with performance enhancements that make it easier to implement OCR-assisted user experiences. > [!WARNING] > The Computer Vision legacy [OCR API in v3.2](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20d) and [RecognizeText API in v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/5cd27ec07268f6c679a3e641/operations/587f2c6a1540550560080311) operations are not recomended for use. Try out OCR by using Vision Studio. Then follow one of the links to the Read edi ## OCR supported languages -Both **Read** versions available today in Computer Vision support several languages for printed and handwritten text. OCR for printed text includes support for English, French, German, Italian, Portuguese, Spanish, Chinese, Japanese, Korean, Russian, Arabic, Hindi, and other international languages that use Latin, Cyrillic, Arabic, and Devanagari scripts. OCR for handwritten text includes support for English, Chinese Simplified, French, German, Italian, Japanese, Korean, Portuguese, and Spanish languages. +Both **Read** versions available today in Computer Vision support several languages for printed and handwritten text. OCR for printed text supports English, French, German, Italian, Portuguese, Spanish, Chinese, Japanese, Korean, Russian, Arabic, Hindi, and other international languages that use Latin, Cyrillic, Arabic, and Devanagari scripts. OCR for handwritten text supports English, Chinese Simplified, French, German, Italian, Japanese, Korean, Portuguese, and Spanish languages. Refer to the full list of [OCR-supported languages](./language-support.md#optical-character-recognition-ocr). |
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview.md | Azure's Computer Vision service gives you access to advanced algorithms that pro | Service|Description| |||-| [Optical Character Recognition (OCR)](overview-ocr.md)|The Optical Character Recognition (OCR) service extracts text from images. You can use the new Read API to extract printed and handwritten text from photos and documents. It uses deep-learning-based models and works with text on a variety of surfaces and backgrounds. These include business documents, invoices, receipts, posters, business cards, letters, and whiteboards. The OCR APIs support extracting printed text in [several languages](./language-support.md). Follow the [OCR quickstart](quickstarts-sdk/client-library.md) to get started.| +| [Optical Character Recognition (OCR)](overview-ocr.md)|The Optical Character Recognition (OCR) service extracts text from images. You can use the new Read API to extract printed and handwritten text from photos and documents. It uses deep-learning-based models and works with text on various surfaces and backgrounds. These include business documents, invoices, receipts, posters, business cards, letters, and whiteboards. The OCR APIs support extracting printed text in [several languages](./language-support.md). Follow the [OCR quickstart](quickstarts-sdk/client-library.md) to get started.| |[Image Analysis](overview-image-analysis.md)| The Image Analysis service extracts many visual features from images, such as objects, faces, adult content, and auto-generated text descriptions. Follow the [Image Analysis quickstart](quickstarts-sdk/image-analysis-client-library-40.md) to get started.| | [Face](overview-identity.md) | The Face service provides AI algorithms that detect, recognize, and analyze human faces in images. Facial recognition software is important in many different scenarios, such as identity verification, touchless access control, and face blurring for privacy. Follow the [Face quickstart](quickstarts-sdk/identity-client-library.md) to get started. | | [Spatial Analysis](intro-to-spatial-analysis-public-preview.md)| The Spatial Analysis service analyzes the presence and movement of people on a video feed and produces events that other systems can respond to. Install the [Spatial Analysis container](spatial-analysis-container.md) to get started.| Computer Vision can analyze images that meet the following requirements: - The image must be presented in JPEG, PNG, GIF, or BMP format - The file size of the image must be less than 4 megabytes (MB) - The dimensions of the image must be greater than 50 x 50 pixels- - For the Read API, the dimensions of the image must be between 50 x 50 and 10000 x 10000 pixels. + - For the Read API, the dimensions of the image must be between 50 x 50 and 10,000 x 10,000 pixels. ## Data privacy and security |
cognitive-services | Client Library | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/quickstarts-sdk/client-library.md | |
cognitive-services | Identity Client Library | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/quickstarts-sdk/identity-client-library.md | zone_pivot_groups: programming-languages-set-face Previously updated : 12/27/2022 Last updated : 07/04/2023 ms.devlang: csharp, golang, javascript, python |
cognitive-services | Logo Detector Mobile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/logo-detector-mobile.md | Title: "Tutorial: Use custom logo detector to recognize Azure services - Custom Vision" -description: In this tutorial, you will step through a sample app that uses Custom Vision as part of a logo detection scenario. Learn how Custom Vision is used with other components to deliver an end-to-end application. +description: In this tutorial, you'll step through a sample app that uses Custom Vision as part of a logo detection scenario. Learn how Custom Vision is used with other components to deliver an end-to-end application. -In this tutorial, you'll explore a sample app that uses Custom Vision as part of a larger scenario. The AI Visual Provision app, a Xamarin.Forms app for mobile platforms, analyzes camera pictures of Azure service logos and then deploys the actual services to the user's Azure account. Here you'll learn how it uses Custom Vision in coordination with other components to deliver a useful end-to-end application. You can run the whole app scenario for yourself, or you can complete only the Custom Vision part of the setup and explore how the app uses it. +In this tutorial, you'll explore a sample app that uses Custom Vision as part of a larger scenario. The AI Visual Provision app, a Xamarin.Forms application for mobile platforms, analyzes photos of Azure service logos and then deploys those services to the user's Azure account. Here you'll learn how it uses Custom Vision in coordination with other components to deliver a useful end-to-end application. You can run the whole app scenario for yourself, or you can complete only the Custom Vision part of the setup and explore how the app uses it. -This tutorial will show you how to: +This tutorial shows you how to: > [!div class="checklist"] > - Create a custom object detector to recognize Azure service logos. If you don't have an Azure subscription, create a [free account](https://azure.m ## Get the source code -If you want to use the provided web app, clone or download the app's source code from the [AI Visual Provision](https://github.com/Microsoft/AIVisualProvision) repository on GitHub. Open the *Source/VisualProvision.sln* file in Visual Studio. Later on, you'll edit some of the project files so you can run the app. +If you want to use the provided web app, clone or download the app's source code from the [AI Visual Provision](https://github.com/Microsoft/AIVisualProvision) repository on GitHub. Open the *Source/VisualProvision.sln* file in Visual Studio. Later, you'll edit some of the project files so you can run the app yourself. ## Create an object detector -Sign in to the [Custom Vision website](https://customvision.ai/) and create a new project. Specify an Object Detection project and use the Logo domain; this will let the service use an algorithm optimized for logo detection. +Sign in to the [Custom Vision web portal](https://customvision.ai/) and create a new project. Specify an Object Detection project and use the Logo domain; this will let the service use an algorithm optimized for logo detection.  This result takes the form of a **PredictionResult** instance, which itself cont To learn more about how the app handles this data, start with the **GetResourcesAsync** method. This method is defined in the *Source/VisualProvision/Services/Recognition/RecognitionService.cs* file. 
-## Add Computer Vision +## Add text recognition The Custom Vision portion of the tutorial is complete. If you want to run the app, you'll need to integrate the Computer Vision service as well. The app uses the Computer Vision text recognition feature to supplement the logo detection process. An Azure logo can be recognized by its appearance *or* by the text printed near it. Unlike Custom Vision models, Computer Vision is pretrained to perform certain operations on images or videos. |
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/overview.md | Custom Vision functionality can be divided into two features. **[Image classific ### Optimization -The Custom Vision service is optimized to quickly recognize major differences between images, so you can start prototyping your model with a small amount of data. 50 images per label are generally a good start. However, the service is not optimal for detecting subtle differences in images (for example, detecting minor cracks or dents in quality assurance scenarios). +The Custom Vision service is optimized to quickly recognize major differences between images, so you can start prototyping your model with a small amount of data. It's generally a good start to use 50 images per label. However, the service isn't optimal for detecting subtle differences in images (for example, detecting minor cracks or dents in quality assurance scenarios). -Additionally, you can choose from several variations of the Custom Vision algorithm that are optimized for images with certain subject material—for example, landmarks or retail items. See [Select a domain](select-domain.md) for more information. +Additionally, you can choose from several variations of the Custom Vision algorithm that are optimized for images with certain subject material—for example, landmarks or retail items. For more information, see [Select a domain](select-domain.md). ## How to use it -The Custom Vision Service is available as a set of native SDKs as well as through a web-based interface on the [Custom Vision portal](https://customvision.ai/). You can create, test, and train a model through either interface or use both together. +The Custom Vision Service is available as a set of native SDKs and through a web-based interface on the [Custom Vision portal](https://customvision.ai/). You can create, test, and train a model through either interface or use both together. ### Supported browsers for Custom Vision web portal The Custom Vision portal can be used by the following web browsers: ## Backup and disaster recovery -As a part of Azure, Custom Vision Service has components that are maintained across multiple regions. Service zones and regions are used by all of our services to provide continued service to our customers. For more information on zones and regions, see [Azure regions](../../availability-zones/az-overview.md). If you need additional information or have any issues, please [contact support](/answers/topics/azure-custom-vision.html). +As a part of Azure, Custom Vision Service has components that are maintained across multiple regions. Service zones and regions are used by all of our services to provide continued service to our customers. For more information on zones and regions, see [Azure regions](../../availability-zones/az-overview.md). If you need additional information or have any issues, [contact support](/answers/topics/azure-custom-vision.html). ## Data privacy and security |
cognitive-services | Speech Container Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-overview.md | The following table lists the Speech containers available in the Microsoft Conta | Container | Features | Supported versions and locales | |--|--|--|-| [Speech to text](speech-container-stt.md) | Transcribes continuous real-time speech or batch audio recordings with intermediate results. | Latest: 3.14.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list).| -| [Custom speech to text](speech-container-cstt.md) | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | Latest: 3.14.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list). | +| [Speech to text](speech-container-stt.md) | Transcribes continuous real-time speech or batch audio recordings with intermediate results. | Latest: 4.0.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list).| +| [Custom speech to text](speech-container-cstt.md) | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | Latest: 4.0.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list). | | [Speech language identification](speech-container-lid.md)<sup>1, 2</sup> | Detects the language spoken in audio files. | Latest: 1.11.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/language-detection/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/language-detection/tags/list). |-| [Neural text to speech](speech-container-ntts.md) | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | Latest: 2.13.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/neural-text-to-speech/tags/list). 
| +| [Neural text to speech](speech-container-ntts.md) | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | Latest: 2.14.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/neural-text-to-speech/tags/list). | <sup>1</sup> The container is available in public preview. Containers in preview are still under development and don't meet Microsoft's stability and support requirements. <sup>2</sup> Not available as a disconnected container. |
cognitive-services | Cognitive Services Virtual Networks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-virtual-networks.md | Only IPV4 addresses are supported at this time. Each Cognitive Services resource To grant access from your on-premises networks to your Cognitive Services resource with an IP network rule, you must identify the internet facing IP addresses used by your network. Contact your network administrator for help. -If you're using [ExpressRoute](../expressroute/expressroute-introduction.md) on-premises for public peering or Microsoft peering, you'll need to identify the NAT IP addresses. For public peering, each ExpressRoute circuit by default uses two NAT IP addresses. Each is applied to Azure service traffic when the traffic enters the Microsoft Azure network backbone. For Microsoft peering, the NAT IP addresses that are used are either customer provided or are provided by the service provider. To allow access to your service resources, you must allow these public IP addresses in the resource IP firewall setting. To find your public peering ExpressRoute circuit IP addresses, [open a support ticket with ExpressRoute](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) via the Azure portal. Learn more about [NAT for ExpressRoute public and Microsoft peering.](../expressroute/expressroute-nat.md#nat-requirements-for-azure-public-peering) +If you're using [ExpressRoute](../expressroute/expressroute-introduction.md) on-premises for public peering or Microsoft peering, you need to identify the NAT IP addresses. For public peering, each ExpressRoute circuit by default uses two NAT IP addresses. Each is applied to Azure service traffic when the traffic enters the Microsoft Azure network backbone. For Microsoft peering, the NAT IP addresses that are used are either customer provided or are provided by the service provider. To allow access to your service resources, you must allow these public IP addresses in the resource IP firewall setting. To find your public peering ExpressRoute circuit IP addresses, [open a support ticket with ExpressRoute](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) via the Azure portal. Learn more about [NAT for ExpressRoute public and Microsoft peering.](../expressroute/expressroute-nat.md#nat-requirements-for-azure-public-peering) ### Managing IP network rules When creating the private endpoint, you must specify the Cognitive Services reso Clients on a VNet using the private endpoint should use the same connection string for the Cognitive Services resource as clients connecting to the public endpoint. The exception is the Speech Services, which require a separate endpoint. See the section on [Private endpoints with the Speech Services](#private-endpoints-with-the-speech-services). We rely upon DNS resolution to automatically route the connections from the VNet to the Cognitive Services resource over a private link. -We create a [private DNS zone](../dns/private-dns-overview.md) attached to the VNet with the necessary updates for the private endpoints, by default. However, if you're using your own DNS server, you may need to make additional changes to your DNS configuration. The section on [DNS changes](#dns-changes-for-private-endpoints) below describes the updates required for private endpoints. 
+We create a [private DNS zone](../dns/private-dns-overview.md) attached to the VNet with the necessary updates for the private endpoints, by default. However, if you're using your own DNS server, you may need to make more changes to your DNS configuration. The section on [DNS changes](#dns-changes-for-private-endpoints) below describes the updates required for private endpoints. ### Private endpoints with the Speech Services See [Using Speech Services with private endpoints provided by Azure Private Link ### DNS changes for private endpoints -When you create a private endpoint, the DNS CNAME resource record for the Cognitive Services resource is updated to an alias in a subdomain with the prefix '*privatelink*'. By default, we also create a [private DNS zone](../dns/private-dns-overview.md), corresponding to the '*privatelink*' subdomain, with the DNS A resource records for the private endpoints. +When you create a private endpoint, the DNS CNAME resource record for the Cognitive Services resource is updated to an alias in a subdomain with the prefix `privatelink`. By default, we also create a [private DNS zone](../dns/private-dns-overview.md), corresponding to the `privatelink` subdomain, with the DNS A resource records for the private endpoints. When you resolve the endpoint URL from outside the VNet with the private endpoint, it resolves to the public endpoint of the Cognitive Services resource. When resolved from the VNet hosting the private endpoint, the endpoint URL resolves to the private endpoint's IP address. This approach enables access to the Cognitive Services resource using the same connection string for clients in the VNet hosting the private endpoints and clients outside the VNet. -If you are using a custom DNS server on your network, clients must be able to resolve the fully qualified domain name (FQDN) for the Cognitive Services resource endpoint to the private endpoint IP address. Configure your DNS server to delegate your private link subdomain to the private DNS zone for the VNet. +If you're using a custom DNS server on your network, clients must be able to resolve the fully qualified domain name (FQDN) for the Cognitive Services resource endpoint to the private endpoint IP address. Configure your DNS server to delegate your private link subdomain to the private DNS zone for the VNet. > [!TIP] > When using a custom or on-premises DNS server, you should configure your DNS server to resolve the Cognitive Services resource name in the 'privatelink' subdomain to the private endpoint IP address. You can do this by delegating the 'privatelink' subdomain to the private DNS zone of the VNet, or configuring the DNS zone on your DNS server and adding the DNS A records. -For more information on configuring your own DNS server to support private endpoints, refer to the following articles: +For more information on configuring your own DNS server to support private endpoints, see the following articles: * [Name resolution for resources in Azure virtual networks](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) * [DNS configuration for private endpoints](../private-link/private-endpoint-overview.md#dns-configuration) |
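To complement the virtual-network changes summarized above, here is a minimal Azure CLI sketch of the two operations the article discusses: adding an IP network rule to the resource firewall and creating the `privatelink` private DNS zone linked to a VNet. Resource names and the IP range are placeholders, and the commands assume the Cognitive Services resource and VNet already exist.

```azurecli
# Allow a public IP range through the resource firewall (placeholder names and range).
az cognitiveservices account network-rule add \
  --resource-group <resource-group> \
  --name <cognitive-services-account> \
  --ip-address 198.51.100.0/24

# Create the private DNS zone used for Cognitive Services private endpoints and link it
# to the VNet so the resource FQDN resolves to the private endpoint IP address.
az network private-dns zone create \
  --resource-group <resource-group> \
  --name "privatelink.cognitiveservices.azure.com"

az network private-dns link vnet create \
  --resource-group <resource-group> \
  --zone-name "privatelink.cognitiveservices.azure.com" \
  --name <dns-link-name> \
  --virtual-network <vnet-name> \
  --registration-enabled false
```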
cognitive-services | Encrypt Data At Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/content-safety/how-to/encrypt-data-at-rest.md | -Azure Content Safety automatically encrypts your data when it's persisted to the cloud. The encryption protects your data and helps you meet your organizational security and compliance commitments. This article covers how Azure Content Safety handles encryption of data at rest. +Azure Content Safety automatically encrypts your data when it's uploaded to the cloud. The encryption protects your data and helps you meet your organizational security and compliance commitments. This article covers how Azure Content Safety handles encryption of data at rest. ## About Cognitive Services encryption Azure Content Safety is part of Azure Cognitive Services. Cognitive Services data is encrypted and decrypted using [FIPS 140-2](https://en.wikipedia.org/wiki/FIPS_140-2) compliant [256-bit AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) encryption. Encryption and decryption are transparent, meaning encryption and access are managed for you. Your data is secure by default and you don't need to modify your code or applications to take advantage of encryption. -> [!IMPORTANT] -> For blocklist name, only MMK encryption is applied by default. Using CMK or not will not change this behavior. All the other data will use either MMK or CMK depending on what you've selected. ## About encryption key management By default, your subscription uses Microsoft-managed encryption keys. There's also the option to manage your subscription with your own keys called customer-managed keys (CMK). CMK offers greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data. +> [!IMPORTANT] +> For blocklist names, only MMK encryption is applied by default. Using CMK or not will not change this behavior. All the other data will use either MMK or CMK depending on what you've selected. + ## Customer-managed keys with Azure Key Vault Customer-managed keys (CMK), also known as Bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data. |
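As a hedged sketch of the customer-managed key (CMK) prerequisites mentioned above, the following Azure CLI commands create a key vault with purge protection, add an RSA key, and grant a managed identity the key permissions commonly required for wrap and unwrap operations. All names and IDs are placeholders; confirm the exact CMK onboarding steps for Azure Content Safety in its own documentation.

```azurecli
# Create a key vault with purge protection (soft delete is enabled by default).
az keyvault create \
  --resource-group <resource-group> \
  --name <key-vault-name> \
  --location <location> \
  --enable-purge-protection true

# Create an RSA key to use as the customer-managed key.
az keyvault key create \
  --vault-name <key-vault-name> \
  --name <key-name> \
  --kty RSA \
  --size 2048

# Allow the resource's managed identity to use the key (object ID is a placeholder).
az keyvault set-policy \
  --name <key-vault-name> \
  --object-id <managed-identity-object-id> \
  --key-permissions get wrapKey unwrapKey
```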
cognitive-services | Abuse Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/abuse-monitoring.md | description: Learn about the abuse monitoring capabilities of Azure OpenAI Servi + Last updated 06/16/2023 |
cognitive-services | Advanced Prompt Engineering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/advanced-prompt-engineering.md | description: Learn about the options for how to use prompt engineering with GPT- + Last updated 04/20/2023 |
cognitive-services | Content Filter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/content-filter.md | description: Learn about the content filtering capabilities of Azure OpenAI in A + Last updated 06/08/2023 |
cognitive-services | Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md | Title: Azure OpenAI Service models description: Learn about the different model capabilities that are available with Azure OpenAI. + Last updated 06/30/2023 |
cognitive-services | Prompt Engineering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/prompt-engineering.md | Title: Azure OpenAI Service | Introduction to Prompt engineering description: Learn how to use prompt engineering to optimize your work with Azure OpenAI Service. + Last updated 03/21/2023 |
cognitive-services | Red Teaming | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/red-teaming.md | Title: Introduction to red teaming large language models (LLMs) description: Learn how red teaming and adversarial testing are essential practices in the responsible development of systems and features that use large language models (LLMs) + Last updated 05/18/2023 |
cognitive-services | System Message | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/system-message.md | Title: System message framework and template recommendations for Large Language description: Learn how to construct system messages, also known as metaprompts, to guide an AI system's behavior. + Last updated 05/19/2023 |
cognitive-services | Encrypt Data At Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/encrypt-data-at-rest.md | |
cognitive-services | What Are Cognitive Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/what-are-cognitive-services.md | |
communication-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/whats-new.md | We've created this page to keep you updated on new features, blog posts, and oth <br> <br> -## From the community -See examples and get inspired by what's being done in the community of Azure Communication Services users. +## New features +Get detailed information on the latest Azure Communication Services feature launches. +### Call Automation and Call Recording +Use Azure Communication Services call automation to transform your customer experiences. Azure Communication Services Call Automation provides developers the ability to build server-based, intelligent call workflows, and call recording for voice and Public Switched Telephone Network(PSTN) channels. -### Extend Azure Communication Services with Power Platform Connectors +[Read the full blog post](https://techcommunity.microsoft.com/t5/azure-communication-services/build-2023-transforming-customer-experiences-with-automated-ai/ba-p/3827857) -Listen to Azure Communication Services PMs Tomas Chladek and David de Matheu talk about how to connect your Azure Communication Services app to Microsoft Teams, and extend it with the Microsoft Power Platform. +[Try out a sample](https://aka.ms/acs-ca-demo) -[Watch the video](https://www.youtube.com/watch?v=-TPI293h0mY&t=3s&pp=ygUcYXp1cmUgY29tbXVuaWNhdGlvbiBzZXJ2aWNlcw%3D%3D) +[Read the Call Automation conceptual docs](./concepts/call-automation/call-automation.md) +<br> +<br> -[Read the Power Pages documentation](https://learn.microsoft.com/power-pages/configure/component-framework) -[Read the tutorial on integrating with Teams](https://aka.ms/mscloud-acs-teams-tutorial) +### Virtual Rooms + Azure Communication Services Virtual Rooms is a new set of APIs that allows developers to control who can join a call, when they meet, and how they collaborate during group meetings. Azure Communication Services Rooms is now Generally Available. +[Read more about the Rooms APIs](https://techcommunity.microsoft.com/t5/azure-communication-services/azure-communication-services-virtual-rooms-is-now-generally/ba-p/3845412) ++[Try the virtual rooms quickstart](./quickstarts/rooms/join-rooms-call.md) ++[Read the virtual rooms conceptual documentation](./concepts/rooms/room-concept.md) <br> <br> -### Integrate Azure Communication Services calling into a React App --Learn how to create an app using Azure Communication services front-end components in React. -[Watch the video](https://www.youtube.com/watch?v=ZyBNYblzISs&pp=ygUcYXp1cmUgY29tbXVuaWNhdGlvbiBzZXJ2aWNlcw%3D%3D) +### Calling SDK for Windows -[View the Microsoft Cloud Integrations repo](https://github.com/microsoft/microsoftcloud) +Add first-class calling capabilities to your Windows applications. Use this SDK to natively create rich audio and video experiences in Windows for your customers, tailored to your specific needs and preferences. With the Calling SDK for Windows, you can implement real-time communication features such as voice and video calls, Microsoft Teams meeting integration, screen sharing, and raw media access. Azure Communication Services Calling SDK for Windows is now generally available. 
-[Read the tutorial on integrating with Teams](https://aka.ms/mscloud-acs-teams-tutorial) +[Read more about the calling SDK for Windows](https://techcommunity.microsoft.com/t5/azure-communication-services/add-first-class-calling-capabilities-to-your-windows/ba-p/3836731) -[Read more about the UI Library](https://aka.ms/acs-ui-library) +[Read the calling SDK overview](./concepts/voice-video-calling/calling-sdk-features.md) +[Check out the calling SDK quickstart](./quickstarts/voice-video-calling/get-started-with-video-calling.md) <br> <br> -### Dynamically create an Azure Communication Services identity and token --Learn how an external developer can get a token that allows your app to join a teams meeting through Azure Communication Services. --[Watch the video](https://www.youtube.com/watch?v=OgE72PGq6TM&pp=ygUcYXp1cmUgY29tbXVuaWNhdGlvbiBzZXJ2aWNlcw%3D%3D) +## Blog posts and case studies +Go deeper on common scenarios and learn more about how customers are using advanced Azure Communication +Services features. -[Read more about Microsoft Cloud Integrations](https://aka.ms/microsoft-cloud) -[View the Microsoft Cloud Integrations repo](https://github.com/microsoft/microsoftcloud) +### Generate and send SMS and email using Azure OpenAI services and Azure Communication Services -[Read the tutorial on integrating with Teams](https://aka.ms/mscloud-acs-teams-tutorial) +Learn how to us openAI and Azure Communication Services to send automated alerts. -[Read more about the UI Library](https://aka.ms/acs-ui-library) +[Read the blog post](https://techcommunity.microsoft.com/t5/azure-communication-services/generate-and-send-sms-and-email-using-azure-openai-services-and/ba-p/3836098) -[Read the documentation on Azure Functions](https://aka.ms/msazure--functions) +[Read the step-by-step instructions](https://github.com/pnp/powerplatform-samples/blob/main/samples/contoso-school/lab/manual.md) -[View the Graph Explorer](https://aka.ms/ge) +[Check out the pre-built solution](https://github.com/pnp/powerplatform-samples/tree/main/samples/contoso-school) +<br> +<br> <br> <br> -### Deploy an Azure Communication Services app to Azure -Learn how to quickly and easily deploy your Azure Communication Services app to Azure. +### Azure Communications Services at Microsoft Build -[Watch the video](https://www.youtube.com/watch?v=JYs5CPyu2Io&pp=ygUcYXp1cmUgY29tbXVuaWNhdGlvbiBzZXJ2aWNlcw%3D%3D) +A recap of all of the Azure Communication Services sessions and discussions at Microsoft Build. -[Read more about Microsoft Cloud Integrations](https://aka.ms/microsoft-cloud) +[Read the full blog post](https://techcommunity.microsoft.com/t5/azure-communication-services/build-communication-apps-for-microsoft-teams-users-with-azure/ba-p/3775688) -[View the Microsoft Cloud Integrations repo](https://github.com/microsoft/microsoftcloud) +[View the UI Library documentation](https://azure.github.io/communication-ui-library/) -[Read the tutorial on integrating with Teams](https://aka.ms/mscloud-acs-teams-tutorial) -[Read more about the UI Library](https://aka.ms/acs-ui-library) +<br> +<br> ++## From the community +See examples and get inspired by what's being done in the community of Azure Communication Services users. 
-[Read the documentation on Azure Functions](https://aka.ms/msazure--functions) -[View the Graph Explorer](https://aka.ms/ge) +### Build AI-assisted communication workflows for customer engagement -<br> -<br> +Listen to Azure Communication Services PMs Ashwinder Bhatti and Anuj Bhatia talk about how to use Azure Communication Services features and tools to build intelligent workflows that businesses can use to improve customer engagement. -<br> +[Watch the video](https://youtu.be/EYTjH1xrmtI) -## New features -Get detailed information on the latest Azure Communication Services feature launches. -### Email service now generally available +[Learn more about Azure Cognitive Services](https://azure.microsoft.com/products/cognitive-services/) -Azure Communication Services announces the general availability of our Email service. Email is powered by Exchange Online and meets the security and privacy requirements of enterprises. +[Learn more about Azure Event Grid](../event-grid/overview.md) -[Read about ACS Email](https://techcommunity.microsoft.com/t5/azure-communication-services/simpler-faster-azure-communication-services-email-now-generally/ba-p/3788541) <br> <br> +### Create custom virtual meetings apps with Azure Communication Services and Microsoft Teams -### View of April's new features +Join Microsoft PMs Tomas Chladek and Ben Olson as they discuss how to create virtual meetings applications that use Azure Communication Services and interop seamlessly with Microsoft Teams -In April, we launched a host of new features, including: -* Troubleshooting capability in UI library for native -* Toll-free verification -* SMS insights dashboard -* and others... +[Watch the video](https://youtu.be/IBCp_-dk_m0) ++[Learn how to set up the Microsoft Teams virtual appointments app](https://learn.microsoft.com/microsoft-365/frontline/virtual-appointments-app) ++[Read more about Microsoft Teams Premium](https://www.microsoft.com/microsoft-teams/premium) -[View the complete list](https://techcommunity.microsoft.com/t5/azure-communication-services/azure-communication-services-april-2023-feature-updates/ba-p/3786509) of all new features added to Azure Communication Services in April. <br> <br> -<br> +### Building an SMS generator with short URLs using Azure Functions, Storage, and Communication Services -## Blog posts and case studies -Go deeper on common scenarios and learn more about how customers are using advanced Azure Communication -Services features. +Learn how to convert a lengthy URL into a format that fits the format of SMS, and then send the SMS using Azure Communication Services. +[Watch the video](https://youtu.be/Knctudbao1o) -### ABN AMRO case study +[Read the accompanying tutorial](https://aka.ms/sms-shorturl) -ABN AMRO used Azure Communication Services to make it easier for customers to get financial advice from anywhere. And they boosted their NPS in the process! +[Read the quickstart on how to send an SMS using Azure Communication Services](./quickstarts/sms/send.md) -[Read the full story](https://customers.microsoft.com/story/1607768338625418317-abnamro-bankingandcapitalmarkets-microsofteams) <br> <br> +### Building on the Microsoft Cloud: Audio/video calling from a custom app ++Join Microsoft Senior Cloud Advocate Ayca Bas and Principal Content Engineer Dan Wahlin as they share how Microsoft Cloud services can empower your apps with audio/video communication. 
-### Get insights from customer interactions with Azure Communication Services and OpenAI +[Watch the video](https://build.microsoft.com/sessions/78b513e3-6e5b-4c4a-a3da-d663219ed674?source=/speakers/2432ad6b-4c45-44ae-b1d6-2c0334e7eb33) -Use the gold mine of customer conversations to automatically generate customer insights and create better customer experiences. +[Read the accompanying tutorial](https://aka.ms/mscloud-acs-teams-tutorial) -[Read the full blog post](https://techcommunity.microsoft.com/t5/azure-communication-services/get-insights-from-customer-interactions-with-azure-communication/ba-p/3783858) +[Read the quickstart on how to send an SMS using Azure Communication Services](./quickstarts/sms/send.md) -[Read about the Azure OpenAI service](https://azure.microsoft.com/products/cognitive-services/openai-service/) <br> <br> +<br> -### Latest updates to the UI library --Get up-to-date on the latest additions to the Azure Communication Services UI library. UI library makes it easier to create custom applications with only a few lines of code. +### View of May's new features -[Read the full blog post](https://techcommunity.microsoft.com/t5/azure-communication-services/build-communication-apps-for-microsoft-teams-users-with-azure/ba-p/3775688) --[View the UI Library documentation](https://azure.github.io/communication-ui-library/) +In May, we launched a host of new features, including: +* Simulcast support on Edge/Chrome desktop +* Inline image support and other Teams interoperability improvements +* Skip setup screen for UI Library native +* Raised hand +* Power Automate inbound SMS connector +* and others... +[View the complete list](https://techcommunity.microsoft.com/t5/azure-communication-services/azure-communication-services-may-2023-feature-updates/ba-p/3813869) of all new features added to Azure Communication Services in April. <br> <br> +<br> + -Enjoy all of these new features. Be sure to check back here periodically for more news and updates on all of the new capabilities we've added to our platform! For a complete list of new features and bug fixes, visit our [releases page](https://github.com/Azure/Communication/releases) on GitHub. +Enjoy all of these new features. Be sure to check back here periodically for more news and updates on all of the new capabilities we've added to our platform! For a complete list of new features and bug fixes, visit our [releases page](https://github.com/Azure/Communication/releases) on GitHub. For more blog posts, as they're released, visit the [Azure Communication Services blog](https://techcommunity.microsoft.com/t5/azure-communication-services/bg-p/AzureCommunicationServicesBlog) |
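The what's new entries above point repeatedly to the SMS quickstart and the SMS short-URL tutorial. As a minimal, hedged C# sketch of that pattern (not taken from the linked posts), the snippet below sends a single SMS with the `Azure.Communication.Sms` client; the connection string and phone numbers are placeholders.

```csharp
using System;
using Azure;
using Azure.Communication.Sms;

// Placeholder connection string from an Azure Communication Services resource.
string connectionString = "<communication-services-connection-string>";
SmsClient smsClient = new SmsClient(connectionString);

// Send a single message; the sender must be an SMS-capable number on the resource.
Response<SmsSendResult> response = smsClient.Send(
    from: "<from-phone-number>",
    to: "<to-phone-number>",
    message: "Your appointment is confirmed for tomorrow at 10:00.");

Console.WriteLine($"Sent message {response.Value.MessageId}, successful: {response.Value.Successful}");
```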
communications-gateway | Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/deploy.md | To enable the application, add the Application ID of the system-assigned managed 1. Log into the [Operator Connect portal](https://operatorconnect.microsoft.com/operator/configuration). 1. Add a new **Application Id**, using the Application ID that you found. -## 6. Register your deployment's domain name in Active Directory +## 8. Register your deployment's domain name in Active Directory Microsoft Teams only sends traffic to domains that you've confirmed that you own. Your Azure Communications Gateway deployment automatically receives an autogenerated fully qualified domain name (FQDN). You need to add this domain name to your Active Directory tenant as a custom domain name, share the details with your onboarding team and then verify the domain name. This process confirms that you own the domain. |
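The deployment steps above include registering the deployment's autogenerated domain name as a custom domain in the Active Directory tenant and then verifying it. A rough Microsoft Graph PowerShell sketch of that flow follows; it is an assumption-level illustration only (verify the cmdlet names and the exact verification workflow against the Communications Gateway and Microsoft Graph documentation), and the domain name is a placeholder.

```powershell
# Sign in with permission to manage domains (assumed scope).
Connect-MgGraph -Scopes "Domain.ReadWrite.All"

# Add the autogenerated Azure Communications Gateway FQDN as a custom domain (placeholder).
New-MgDomain -Id "<deployment-fqdn>"

# Retrieve the DNS records needed to prove ownership, then verify once they're published.
Get-MgDomainVerificationDnsRecord -DomainId "<deployment-fqdn>"
Confirm-MgDomain -DomainId "<deployment-fqdn>"
```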
cosmos-db | Hierarchical Partition Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/hierarchical-partition-keys.md | -Azure Cosmos DB distributes your data across logical and physical partitions based on your partition key to enable horizontal scaling. With hierarchical partition keys, or subpartitoning, you can now configure up to a three level hierarchy for your partition keys to further optimize data distribution and enable higher scale. +Azure Cosmos DB distributes your data across logical and physical partitions based on your partition keys to support horizontal scaling. By using hierarchical partition keys (also called *subpartitoning*), you can configure up to a three-level hierarchy for your partition keys to further optimize data distribution and for a higher level of scaling. -If you use synthetic keys today or have scenarios where partition keys can exceed 20 GB of data, subpartitioning can help. With this feature, logical partition key prefixes can exceed 20 GB and 10,000 RU/s, and queries by prefix are efficiently routed to the subset of partitions with the data. +If you use synthetic keys today or if you have scenarios in which partition keys can exceed 20 GB of data, subpartitioning can help. If you use this feature, logical partition key prefixes can exceed 20 GB and 10,000 request units per second (RU/s). Queries by prefix are efficiently routed to the subset of partitions that hold the data. -## Guidance on choosing hierarchical partition keys +## Choose your hierarchical partition keys -Hierarchical partition keys are highly recommended to users that may have multi-tenant applications. This recommendation exists because hierarchical partitions allow users to scale beyond the logical partition key limit of 20 GB. If your current partition key or a single partition key is frequently hitting 20 GB, hierarchical partitions are a great choice for your workload. When choosing your hierarchical partition keys, it's important to keep the following general partitioning concepts in mind depending on your workload: +If you have multitenant applications, we recommend that you use hierarchical partition keys. Hierarchical partitions allow you to scale beyond the logical partition key limit of 20 GB. If your current partition key or if a single partition key is frequently reaching 20 GB, hierarchical partitions are a great choice for your workload. -For all containers, **each level** of the full path (starting with the very **first level**) of your hierarchical partition key should: +When you choose your hierarchical partition keys, it's important to keep the following general partitioning concepts in mind: -- Have a high cardinality. In other words, the first, second, and third (if applicable) keys of the hierarchical partition should all have a wide range of possible values.-- Spread request unit (RU) consumption and data storage evenly across all logical partitions. This spread ensures even RU consumption and storage distribution across your physical partitions.+- For *all* containers, *each level* of the full path (starting with the *first level*) of your hierarchical partition key should: -For large read-heavy workloads, we recommend choosing hierarchical partition keys that appear frequently in your queries. For example, a workload that frequently runs queries to filter out specific user sessions in a multi-tenant application can benefit from hierarchical partition keys of TenantId, UserId, and SessionId in that order. 
Queries can be efficiently routed to only the relevant physical partitions by including the partition key in the filter predicate. For more information on partition keys for read-heavy workloads, see the [partitioning overview](partitioning-overview.md). + - **Have a high cardinality**. The first, second, and third (if applicable) keys of the hierarchical partition should all have a wide range of possible values. + - **Spread request unit (RU) consumption and data storage evenly across all logical partitions**. This spread ensures even RU consumption and storage distribution across your physical partitions. ++- For *large, read-heavy workloads*, we recommend that you choose hierarchical partition keys that appear frequently in your queries. For example, a workload that frequently runs queries to filter out specific user sessions in a multitenant application can benefit from hierarchical partition keys of `TenantId`, `UserId`, and `SessionId`, in that order. Queries can be efficiently routed to only the relevant physical partitions by including the partition key in the filter predicate. For more information about choosing partition keys for read-heavy workloads, see the [partitioning overview](partitioning-overview.md). ## Example use case -Suppose you have a multi-tenant scenario where you store event information for users in each tenant. This event information could include event occurrences including, but not limited to, as sign-in, clickstream, or payment events. +Suppose you have a multitenant scenario in which you store event information for users in each tenant. The event information might have event occurrences including but not limited to sign-in, clickstream, or payment events. -In a real world scenario, some tenants can grow large with thousands of users, while the many other tenants are smaller with a few users. Partitioning by **/TenantId** may lead to exceeding Azure Cosmos DB's 20-GB storage limit on a single logical partition, while partitioning by **/UserId** makes all queries on a tenant cross-partition. Both approaches have significant downsides. +In a real-world scenario, some tenants can grow large, with thousands of users, while the many other tenants are smaller and have a few users. Partitioning by `/TenantId` might lead to exceeding the Azure Cosmos DB 20-GB storage limit on a single logical partition. Partitioning by `/UserId` makes all queries on a tenant cross-partition. Both approaches have significant downsides. -Using a synthetic partition key that combines **TenantId** and **UserId** adds complexity to the application. Additionally, the synthetic partition key queries for a tenant are still cross-partition, unless all users are known and specified in advance. +Using a synthetic partition key that combines `TenantId` and `UserId` adds complexity to the application. Additionally, the synthetic partition key queries for a tenant are still cross-partition, unless all users are known and specified in advance. -With hierarchical partition keys, we can partition first on **TenantId**, and then **UserId**. If you expect the **TenantId** and **UserId** combination to produce partitions that exceed 20 GB, you can even partition further down to another level, such as **SessionId**. The overall depth can't exceed three levels. When a physical partition exceeds 50 GB of storage, Azure Cosmos DB automatically splits the physical partition so that roughly half of the data is on one physical partition, and half on the other. 
Effectively, subpartitioning means that a single TenantId can exceed 20 GB of data, and it's possible for a TenantId's data to span multiple physical partitions. +With hierarchical partition keys, you can partition first on `TenantId`, and then on `UserId`. If you expect the `TenantId` and `UserId` combination to produce partitions that exceed 20 GB, you can even partition further down to another level, such as on `SessionId`. The overall depth can't exceed three levels. When a physical partition exceeds 50 GB of storage, Azure Cosmos DB automatically splits the physical partition so that roughly half of the data is on one physical partition, and half is on the other. Effectively, subpartitioning means that a single `TenantId` value can exceed 20 GB of data, and it's possible for `TenantId` data to span multiple physical partitions. -Queries that specify either the **TenantId**, or both **TenantId** and **UserId** is efficiently routed to only the subset of physical partitions that contain the relevant data. Specifying the full or prefix subpartitioned partition key path effectively avoids a full fan-out query. For example, if the container had 1000 physical partitions, but a particular **TenantId** was only on five of them, the query would only be routed to the smaller number of relevant physical partitions. +Queries that specify either `TenantId`, or both `TenantId` and `UserId`, are efficiently routed to only the subset of physical partitions that contain the relevant data. Specifying the full or prefix subpartitioned partition key path effectively avoids a full fan-out query. For example, if the container had 1,000 physical partitions, but a specific `TenantId` value was only on 5 physical partitions, the query would be routed to the smaller number of relevant physical partitions. ## Use item ID in hierarchy -If your container has a property that has a large range of possible values, it's likely a great partition key choice for the last level of your hierarchy. One possible example of this type of property is the **item ID**. The system property item ID exists in every item in your container. Adding item ID as another level guarantees that you can scale beyond the logical partition key limit of 20 GB. You can scale beyond this limit for the first or first and second level of keys. +If your container has a property that has a large range of possible values, the property is likely a great partition key choice for the last level of your hierarchy. One possible example of this type of property is the *item ID*. The system property item ID exists in every item in your container. Adding the item ID as another level guarantees that you can scale beyond the logical partition key limit of 20 GB. You can scale beyond this limit for the first level or for the first and second levels of keys. -For example, suppose we have a container for a multi-tenant workload partitioned by `TenantId` and `UserId`. If it's possible for a single combination of `TenantId` and `UserId` to exceed 20 GB, then partitioning with three levels of keys, where the third level key has high cardinality is recommended. An example of this scenario is if the third level key is a GUID with naturally high cardinality. It's unlikely that the combination of TenantId, UserId, and a GUID exceeds 20 GB, so the combination of TenantId and UserId can effectively scale beyond 20 GB. +For example, you might have a container for a multitenant workload that's partitioned by `TenantId` and `UserId`. 
If it's possible for a single combination of `TenantId` and `UserId` to exceed 20 GB, then we recommend that you partition by using three levels of keys, and in which the third-level key has high cardinality. An example of this scenario is if the third-level key is a GUID that has naturally high cardinality. It's unlikely that the combination of `TenantId`, `UserId`, and a GUID exceeds 20 GB, so the combination of `TenantId` and `UserId` can effectively scale beyond 20 GB. -For more information on choosing item ID as a partition key, visit our [partitioning documentation.](partitioning-overview.md). +For more information about using item ID as a partition key, see the [partitioning overview](partitioning-overview.md). -## Getting started +## Get started > [!IMPORTANT]-> Working with containers that use hierarchical partition keys is supported only in following SDK versions. You must use a supported SDK to create new containers with hierarchical partition keys and to perform CRUD/query operations on the data. -> If you would like to use an SDK or connector that isn't currently supported, please file a request on our [community forum](https://feedback.azure.com/d365community/forum/3002b3be-0d25-ec11-b6e6-000d3a4f0858). +> Working with containers that use hierarchical partition keys is supported only in following SDK versions. You must use a supported SDK to create new containers with hierarchical partition keys and to perform create, read, update, and delete (CRUD) or query operations on the data. +> If you want to use an SDK or connector that isn't currently supported, please file a request on our [community forum](https://feedback.azure.com/d365community/forum/3002b3be-0d25-ec11-b6e6-000d3a4f0858). Find the latest preview version of each supported SDK: | SDK | Supported versions | Package manager link | | | | |-| **.NET SDK v3** | *>= 3.33.0* | <https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.33.0/> | -| **Java SDK v4** | *>= 4.42.0* | <https://github.com/Azure/azure-sdk-for-jav#4420-2023-03-17/> | -| **JavaScript SDK v3** | *3.17.4-beta.1* | <https://www.npmjs.com/package/@azure/cosmos/v/3.17.4-beta.1/> | +| .NET SDK v3 | >= 3.33.0 | <https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.33.0/> | +| Java SDK v4 | >= 4.42.0 | <https://github.com/Azure/azure-sdk-for-jav#4420-2023-03-17/> | +| JavaScript SDK v3 | 3.17.4-beta.1 | <https://www.npmjs.com/package/@azure/cosmos/v/3.17.4-beta.1/> | ++## Create a container by using hierarchical partition keys ++To get started, create a new container by using a predefined list of subpartitioning key paths up to three levels of depth. -## Create a container with hierarchical partition keys +You can create a new container by using one of these options: -To get started, create a new container using a predefined list of subpartitioning key paths up to three levels of depth. +- Azure portal +- SDK +- Azure Resource Manager template +- Azure Cosmos DB emulator -### Using the portal +### Azure portal -The simplest way to create a container and specify hierarchical partition keys is with the Azure portal. +The simplest way to create a container and specify hierarchical partition keys is by using the Azure portal. 1. Sign in to the [Azure portal](https://portal.azure.com). -1. Navigate to the existing Azure Cosmos DB for NoSQL account page. +1. Go to the existing Azure Cosmos DB for NoSQL account page. -1. From the Azure Cosmos DB for NoSQL account page, select the **Data Explorer** navigation menu option. +1. 
On the left menu, select **Data Explorer**. - :::image type="content" source="media/hierarchical-partition-keys/data-explorer-menu-option.png" lightbox="media/hierarchical-partition-keys/data-explorer-menu-option.png" alt-text="Screenshot of the page for a new Azure Cosmos DB for NoSQL account with the Data Explorer option highlighted."::: + :::image type="content" source="media/hierarchical-partition-keys/data-explorer-menu-option.png" lightbox="media/hierarchical-partition-keys/data-explorer-menu-option.png" alt-text="Screenshot that shows the page for a new Azure Cosmos DB for NoSQL account with the Data Explorer menu option highlighted."::: -1. In the **Data Explorer**, select the **New Container** option. +1. On **Data Explorer**, select the **New Container** option. - :::image type="content" source="media/hierarchical-partition-keys/new-container-option.png" lightbox="media/hierarchical-partition-keys/new-container-option.png" alt-text="Screenshot of the New Container option within the Data Explorer."::: + :::image type="content" source="media/hierarchical-partition-keys/new-container-option.png" lightbox="media/hierarchical-partition-keys/new-container-option.png" alt-text="Screenshot of the New Container option within Data Explorer."::: -1. In the **New Container** dialog, specify `/TenantId` for the partition key field. Enter any value for the remaining fields (database, container, throughput, etc.). +1. In **New Container**, for **Partition key**, enter `/TenantId`. For the remaining fields, enter any value that matches your scenario. > [!NOTE]- > We are using `/TenantId` as an example here. You can specify any key for the first level when implementing hierarchical partition keys on your own containers. + > We use `/TenantId` as an example here. You can specify any key for the first level when you implement hierarchical partition keys on your own containers. -1. Select **Add Hierarchical Partition Key** twice. +1. Select **Add hierarchical partition key** twice. :::image type="content" source="media/hierarchical-partition-keys/add-hierarchical-partition-key.png" lightbox="media/hierarchical-partition-keys/add-hierarchical-partition-key.png" alt-text="Screenshot of the button to add a new hierarchical partition key."::: -1. For the second and third tiers of subpartitioning, specify `/UserId` and `/SessionId` respectively. +1. For the second and third tiers of subpartitioning, enter `/UserId` and `/SessionId` respectively. :::image type="content" source="media/hierarchical-partition-keys/hierarchical-partition-key-list.png" lightbox="media/hierarchical-partition-keys/hierarchical-partition-key-list.png" alt-text="Screenshot of a list of three hierarchical partition keys."::: 1. Select **OK** to create the container. -### Using an SDK +### SDK -When creating a new container using the SDK, define a list of subpartitioning key paths up to three levels of depth. Use the list of subpartition keys when configuring the properties of the new container. +When you create a new container by using the SDK, define a list of subpartitioning key paths up to three levels of depth. Use the list of subpartition keys when you configure the properties of the new container. 
#### [.NET SDK v3](#tab/net-v3) List<string> subpartitionKeyPaths = new List<string> { "/SessionId" }; -// Create container properties object +// Create a container properties object ContainerProperties containerProperties = new ContainerProperties( id: "<container-name>", partitionKeyPaths: subpartitionKeyPaths ); -// Create container - subpartitioned by TenantId -> UserId -> SessionId +// Create a container that's subpartitioned by TenantId > UserId > SessionId Container container = await database.CreateContainerIfNotExistsAsync(containerProperties, throughput: 400); ``` subpartitionKeyPaths.add("/TenantId"); subpartitionKeyPaths.add("/UserId"); subpartitionKeyPaths.add("/SessionId"); -//Create a partition key definition object with Kind("MultiHash") and Version V2 +//Create a partition key definition object with Kind ("MultiHash") and Version V2 PartitionKeyDefinition subpartitionKeyDefinition = new PartitionKeyDefinition(); subpartitionKeyDefinition.setPaths(subpartitionKeyPaths); subpartitionKeyDefinition.setKind(PartitionKind.MULTI_HASH); subpartitionKeyDefinition.setVersion(PartitionKeyDefinitionVersion.V2); -// Create container properties object +// Create a container properties object CosmosContainerProperties containerProperties = new CosmosContainerProperties("<container-name>", subpartitionKeyDefinition); -// Create throughput properties object +// Create a throughput properties object ThroughputProperties throughputProperties = ThroughputProperties.createManualThroughput(400); -// Create container - subpartitioned by TenantId -> UserId -> SessionId +// Create a container that's subpartitioned by TenantId > UserId > SessionId Mono<CosmosContainerResponse> container = database.createContainerIfNotExists(containerProperties, throughputProperties); ``` -## Using Azure Resource Manager templates +### Azure Resource Manager templates -The Azure Resource Manager template for a subpartitioned container is mostly identical to a standard container with the only key difference being the value of the ``properties/partitionKey`` path. For more information about creating an Azure Resource Manager template for an Azure Cosmos DB resource, see [the Azure Resource Manager template reference for Azure Cosmos DB](/azure/templates/microsoft.documentdb/databaseaccounts). +The Azure Resource Manager template for a subpartitioned container is almost identical to a standard container. The only key difference is the value of the `properties/partitionKey` path. For more information about creating an Azure Resource Manager template for an Azure Cosmos DB resource, see the [Azure Resource Manager template reference for Azure Cosmos DB](/azure/templates/microsoft.documentdb/databaseaccounts). -Configure the ``partitionKey`` object with the following values to create a subpartitioned container. +Configure the `partitionKey` object by using the values in the following table to create a subpartitioned container: | Path | Value | | | |-| **paths** | List of hierarchical partition keys (max three levels of depth) | -| **kind** | ``MultiHash`` | -| **version** | ``2`` | +| `paths` | List of hierarchical partition keys (max three levels of depth) | +| `kind` | `MultiHash` | +| `version` | `2` | -### Example partition key definition +#### Example partition key definition -For example, assume we have a hierarchical partition key composed of **TenantId -> UserId -> SessionId**. 
The ``partitionKey`` object would be configured to include all three values in the **paths** property, a **kind** value of ``MultiHash``, and a **version** value of ``2``. +For example, assume that you have a hierarchical partition key that's composed of `TenantId` > `UserId` > `SessionId`. The `partitionKey` object would be configured to include all three values in the `paths` property, a `kind` value of `MultiHash`, and a `version` value of `2`. #### [Bicep](#tab/bicep) partitionKey: { -For more information about the ``partitionKey`` object, see [ContainerPartitionKey specification](/azure/templates/microsoft.documentdb/databaseaccounts/sqldatabases/containers#containerpartitionkey). +For more information about the `partitionKey` object, see the [ContainerPartitionKey specification](/azure/templates/microsoft.documentdb/databaseaccounts/sqldatabases/containers#containerpartitionkey). -## Using the Azure Cosmos DB emulator +### Azure Cosmos DB emulator -You can test the subpartitioning feature using the latest version of the local emulator for Azure Cosmos DB. To enable subparitioning on the emulator, start the emulator from the installation directory with the ``/EnablePreview`` flag. +You can test the subpartitioning feature by using the latest version of the local emulator for Azure Cosmos DB. To enable subparitioning on the emulator, start the emulator from the installation directory with the `/EnablePreview` flag: ```powershell .\CosmosDB.Emulator.exe /EnablePreview You can test the subpartitioning feature using the latest version of the local e For more information, see [Azure Cosmos DB emulator](./local-emulator.md). -## Use the SDKs to work with containers with hierarchical partition keys +<a name="use-the-sdks-to-work-with-containers-with-hierarchical-partition-keys"></a> -Once you have a container with hierarchical partition keys, use the previously specified versions of the .NET or Java SDKs to perform operations and execute queries on that container. +## Use the SDKs to work with containers that have hierarchical partition keys ++When you have a container that has hierarchical partition keys, use the previously specified versions of the .NET or Java SDKs to perform operations and execute queries on that container. ### Add an item to a container -There are two options to add a new item to a container with hierarchical partition keys enabled. 
+There are two options to add a new item to a container with hierarchical partition keys enabled: ++- Automatic extraction +- Manually specify the path #### Automatic extraction If you pass in an object with the partition key value set, the SDK can automatic ##### [.NET SDK v3](#tab/net-v3) ```csharp-// Create new item +// Create a new item UserSession item = new UserSession() { id = "f7da01b0-090b-41d2-8416-dacae09fbb4a", UserSession item = new UserSession() SessionId = "0000-11-0000-1111" }; -// Pass in the object and the SDK will automatically extract the full partition key path +// Pass in the object, and the SDK automatically extracts the full partition key path ItemResponse<UserSession> createResponse = await container.CreateItemAsync(item); ``` ##### [Java SDK v4](#tab/java-v4) ```java-// Create new item +// Create a new item UserSession item = new UserSession(); item.setId("f7da01b0-090b-41d2-8416-dacae09fbb4a"); item.setTenantId("Microsoft"); item.setUserId("8411f20f-be3e-416a-a3e7-dcd5a3c1f28b"); item.setSessionId("0000-11-0000-1111"); -// Pass in the object and the SDK will automatically extract the full partition key path +// Pass in the object, and the SDK automatically extracts the full partition key path Mono<CosmosItemResponse<UserSession>> createResponse = container.createItem(item); ``` -#### Manually specify path +#### Manually specify the path -The ``PartitionKeyBuilder`` class in the SDK can construct a value for a previously defined hierarchical partition key path. Use this class when adding a new item to a container that has subpartitioning enabled. +The `PartitionKeyBuilder` class in the SDK can construct a value for a previously defined hierarchical partition key path. Use this class when you add a new item to a container that has subpartitioning enabled. > [!TIP]-> At scale, it is often more performant to specify the full partition key path even if the SDK can extract the path from the object. +> At scale, performance might be improved if you specify the full partition key path, even if the SDK can extract the path from the object. ##### [.NET SDK v3](#tab/net-v3) ```csharp-// Create new item object +// Create a new item object PaymentEvent item = new PaymentEvent() { id = Guid.NewGuid().ToString(), ItemResponse<PaymentEvent> createResponse = await container.CreateItemAsync(item ##### [Java SDK v4](#tab/java-v4) ```java-// Create new item object +// Create a new item object UserSession item = new UserSession(); item.setTenantId("Microsoft"); item.setUserId("8411f20f-be3e-416a-a3e7-dcd5a3c1f28b"); Mono<CosmosItemResponse<UserSession>> createResponse = container.createItem(item ### Perform a key/value lookup (point read) of an item -Key/value lookups (point reads) are performed in a manner similar to a non-subpartitioned container. For example, assume we have a hierarchical partition key composed of **TenantId -> UserId -> SessionId**. The unique identifier for the item is a Guid, represented as a string that serves as a unique document transaction identifier. To perform a point read on a single item, pass in the ``id`` property of the item and the full value for the partition key including all three components of the path. +Key/value lookups (point reads) are performed in a way that's similar to a non-subpartitioned container. For example, assume you have a hierarchical partition key that consists of `TenantId` > `UserId` > `SessionId`. The unique identifier for the item is a GUID. 
It's represented as a string that serves as a unique document transaction identifier. To perform a point read on a single item, pass in the `id` property of the item and the full value for the partition key, including all three components of the path. -#### [.NET SDK v3](#tab/net-v3) +##### [.NET SDK v3](#tab/net-v3) ```csharp // Store the unique identifier ItemResponse<UserSession> readResponse = await container.ReadItemAsync<UserSessi ); ``` -#### [Java SDK v4](#tab/java-v4) +##### [Java SDK v4](#tab/java-v4) ```java // Store the unique identifier Mono<CosmosItemResponse<UserSession>> readResponse = container.readItem(id, part ### Run a query -The SDK code to run a query on a subpartitioned container is identical to running a query on a non-subpartitioned container. +The SDK code that you use to run a query on a subpartitioned container is identical to running a query on a non-subpartitioned container. -When the query specifies all values of the partition keys in the ``WHERE`` filter or a prefix of the key hierarchy, the SDK automatically routes the query to the corresponding physical partitions. Queries that provide only the "middle" of the hierarchy are cross partition queries. +When the query specifies all values of the partition keys in the `WHERE` filter or in a prefix of the key hierarchy, the SDK automatically routes the query to the corresponding physical partitions. Queries that provide only the "middle" of the hierarchy are cross-partition queries. -For example, assume we have a hierarchical partition key composed of **TenantId -> UserId -> SessionId**. The components of the query's filter determines if the query is a single-partition, targeted cross-partition, or fan out query. +For example, consider a hierarchical partition key that's composed of `TenantId` > `UserId` > `SessionId`. The components of the query's filter determines if the query is a single-partition query, a targeted cross-partition query, or a fan-out query. | Query | Routing | | | |-| ``SELECT * FROM c WHERE c.TenantId = 'Microsoft' AND c.UserId = '8411f20f-be3e-416a-a3e7-dcd5a3c1f28b' AND c.SessionId = '0000-11-0000-1111'`` | Routed to the **single logical and physical partition** that contains the data for the specified values of ``TenantId``, ``UserId`` and ``SessionId``. | -| ``SELECT * FROM c WHERE c.TenantId = 'Microsoft' AND c.UserId = '8411f20f-be3e-416a-a3e7-dcd5a3c1f28b'`` | Routed to only the **targeted subset of logical and physical partition(s)** that contain data for the specified values of ``TenantId`` and ``UserId``. This query is a targeted cross-partition query that returns data for a specific user in the tenant. | -| ``SELECT * FROM c WHERE c.TenantId = 'Microsoft'`` | Routed to only the **targeted subset of logical and physical partition(s)** that contain data for the specified value of ``TenantId``. This query is a targeted cross-partition query that returns data for all users in a tenant. | -| ``SELECT * FROM c WHERE c.UserId = '8411f20f-be3e-416a-a3e7-dcd5a3c1f28b'`` | Routed to **all physical partitions**, resulting in a fan-out cross-partition query. | -| ``SELECT * FROM c WHERE c.SessionId = '0000-11-0000-1111'`` | Routed to **all physical partitions**, resulting in a fan-out cross-partition query. 
| +| `SELECT * FROM c WHERE c.TenantId = 'Microsoft' AND c.UserId = '8411f20f-be3e-416a-a3e7-dcd5a3c1f28b' AND c.SessionId = '0000-11-0000-1111'` | Routed to the **single logical and physical partition** that contains the data for the specified values of `TenantId`, `UserId`, and `SessionId`. | +| `SELECT * FROM c WHERE c.TenantId = 'Microsoft' AND c.UserId = '8411f20f-be3e-416a-a3e7-dcd5a3c1f28b'` | Routed to only the **targeted subset of logical and physical partition(s)** that contain data for the specified values of `TenantId` and `UserId`. This query is a targeted cross-partition query that returns data for a specific user in the tenant. | +| `SELECT * FROM c WHERE c.TenantId = 'Microsoft'` | Routed to only the **targeted subset of logical and physical partition(s)** that contain data for the specified value of `TenantId`. This query is a targeted cross-partition query that returns data for all users in a tenant. | +| `SELECT * FROM c WHERE c.UserId = '8411f20f-be3e-416a-a3e7-dcd5a3c1f28b'` | Routed to **all physical partitions**, resulting in a fan-out cross-partition query. | +| `SELECT * FROM c WHERE c.SessionId = '0000-11-0000-1111'` | Routed to **all physical partitions**, resulting in a fan-out cross-partition query. | #### Single-partition query on a subpartitioned container -Here's an example of running a query that includes all of the levels of subpartitioning effectively making the query a single-partition query. +Here's an example of running a query that includes all the levels of subpartitioning, effectively making the query a single-partition query. ##### [.NET SDK v3](#tab/net-v3) pagedResponse.byPage().flatMap(fluxResponse -> { #### Targeted multi-partition query on a subpartitioned container -Here's an example of a query including a subset of the levels of subpartitioning effectively making this query a targeted multi-partition query. +Here's an example of a query that includes a subset of the levels of subpartitioning, effectively making this query a targeted multi-partition query. ##### [.NET SDK v3](#tab/net-v3) pagedResponse.byPage().flatMap(fluxResponse -> { ## Limitations and known issues -- Working with containers that use hierarchical partition keys is supported only in .NET v3, Java v4 SDKs, and the preview version of the JavaScript SDK. You must use a supported SDK to create new containers with hierarchical partition keys and to perform CRUD/query operations on the data. Support for other SDKs, like Python, isn't yet available.-- There are limitations with various Azure Cosmos DB connectors (ex. Azure Data Factory).-- You can only specify hierarchical partition keys up to three layers in depth.-- Hierarchical partition keys can currently only be enabled on new containers. The desired partition key paths must be specified at the time of container creation and can't be changed later. To use hierarchical partitions on existing containers, you should create a new container with the hierarchical partition keys set and move the data using [container copy jobs](intra-account-container-copy.md).-- Hierarchical partition keys are currently supported only for API for NoSQL accounts (API for MongoDB and Cassandra aren't currently supported).+- Working with containers that use hierarchical partition keys is supported only in the .NET v3 SDK, in the Java v4 SDK, and in the preview version of the JavaScript SDK. You must use a supported SDK to create new containers that have hierarchical partition keys and to perform CRUD or query operations on the data. 
Support for other SDKs, including Python, isn't available currently. +- There are limitations with various Azure Cosmos DB connectors (for example, with Azure Data Factory). +- You can specify hierarchical partition keys only up to three layers in depth. +- Hierarchical partition keys can currently be enabled only on new containers. You must set partition key paths at the time of container creation, and you can't change them later. To use hierarchical partitions on existing containers, create a new container with the hierarchical partition keys set and move the data by using [container copy jobs](intra-account-container-copy.md). +- Hierarchical partition keys are currently supported only for the API for NoSQL accounts. The APIs for MongoDB and Cassandra aren't currently supported. ## Next steps -- See the FAQ on [hierarchical partition keys.](hierarchical-partition-keys-faq.yml)-- Learn more about [partitioning in Azure Cosmos DB.](partitioning-overview.md)-- Learn more about [using Azure Resource Manager templates with Azure Cosmos DB.](/azure/templates/microsoft.documentdb/databaseaccounts)+- See the FAQ on [hierarchical partition keys](hierarchical-partition-keys-faq.yml). +- Learn more about [partitioning in Azure Cosmos DB](partitioning-overview.md). +- Learn more about [using Azure Resource Manager templates with Azure Cosmos DB](/azure/templates/microsoft.documentdb/databaseaccounts). |
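To round out the hierarchical partition key changes above, here is a condensed .NET sketch of the point-read pattern the article describes: building the full three-level `TenantId` > `UserId` > `SessionId` value with `PartitionKeyBuilder` and passing it together with the item `id`. It assumes the `container` and `UserSession` type from the article's earlier snippets and reuses the sample values shown there.

```csharp
using Microsoft.Azure.Cosmos;

// Build the full hierarchical partition key value: TenantId -> UserId -> SessionId.
PartitionKey partitionKey = new PartitionKeyBuilder()
    .Add("Microsoft")                              // TenantId
    .Add("8411f20f-be3e-416a-a3e7-dcd5a3c1f28b")   // UserId
    .Add("0000-11-0000-1111")                      // SessionId
    .Build();

// Point read: the id plus the full key routes the request to a single logical partition.
ItemResponse<UserSession> readResponse = await container.ReadItemAsync<UserSession>(
    id: "f7da01b0-090b-41d2-8416-dacae09fbb4a",
    partitionKey: partitionKey);

Console.WriteLine($"Read item with id {readResponse.Resource.id}");
```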
cosmos-db | How To Restore In Account Continuous Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-restore-in-account-continuous-backup.md | Title: Restore a container or database into existing same account (preview) + Title: Restore a container or database to the same, existing account (preview) -description: Restore a deleted container or database to an existing same Azure Cosmos DB account using Azure portal, PowerShell, or CLI when using continuous backup mode. +description: Restore a deleted container or database to the same, existing Azure Cosmos DB account by using the Azure portal, the Azure CLI, Azure PowerShell, or an Azure Resource Manager template in continuous backup mode. Last updated 05/08/2023 zone_pivot_groups: azure-cosmos-db-apis-nosql-mongodb-gremlin-table -# Restore a deleted container or database into same Azure Cosmos DB account (preview) +# Restore a deleted container or database to the same Azure Cosmos DB account (preview) [!INCLUDE[NoSQL, MongoDB, Gremlin, Table](includes/appliesto-nosql-mongodb-gremlin-table.md)] -Azure Cosmos DB's point-in-time same account restore feature helps you to recover from an accidental deletion of a container or database. Theis feature restores the deleted database or container to an existing same account in any region where backups exist. The continuous backup mode allows you to restore to any point of time within the last 30 days. +The Azure Cosmos DB point-in-time same-account restore feature helps you recover from an accidental deletion of a container or database. This feature restores the deleted database or container to the same, existing account in any region in which backups exist. Continuous backup mode allows you to restore to any point of time within the last 30 days. ## Prerequisites -- An existing Azure Cosmos DB account.- - If you have an Azure subscription, [create a new account](nosql/how-to-create-account.md?tabs=azure-portal). - - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - - Alternatively, you can [try Azure Cosmos DB free](try-free.md) before you commit. +- An Azure subscription. If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. +- An Azure Cosmos DB account. You can choose one of the following options for an Azure Cosmos DB account: + - Use an existing Azure Cosmos DB account. + - Create a [new Azure Cosmos DB account](nosql/how-to-create-account.md?tabs=azure-portal) in your Azure subscription. + - Create a [Try Azure Cosmos DB free](try-free.md) account with no commitment. ## Restore a deleted container or database -Use the Azure portal, Azure CLI, Azure PowerShell, or an Azure Resource Manager template to restore a deleted container or database into an same account. +Use the Azure portal, the Azure CLI, Azure PowerShell, or an Azure Resource Manager template to restore a deleted container or database in the same, existing account. ### [Azure portal](#tab/azure-portal) -Use the Azure portal to restore a deleted container or database (including child containers). +Use the Azure portal to restore a deleted container or database. Child containers are also restored. -1. Navigate to the [Azure portal](https://portal.azure.com/). +1. Go to the [Azure portal](https://portal.azure.com/). -1. Navigate to your Azure Cosmos DB account and open the **Point In Time Restore** page. +1. 
Go to your Azure Cosmos DB account, and then go to the **Point In Time Restore** page. - > [!NOTE] - > The restore page in Azure portal is only populated if you have the `Microsoft.DocumentDB/locations/restorableDatabaseAccounts/*/read` permission. To learn more about this permission, see [backup and restore permissions](continuous-backup-restore-permissions.md). + > [!NOTE] + > The restore page in Azure portal is populated only if you have the `Microsoft.DocumentDB/locations/restorableDatabaseAccounts/*/read` permission. To learn more about this permission, see [Backup and restore permissions](continuous-backup-restore-permissions.md). -1. Switch to the **Restore to same account** tab. +1. Select the **Restore to same account** tab. - :::image type="content" source="medi\in-account-switch.png" alt-text="Screenshot of the options to restore a database or container to the same account."::: + :::image type="content" source="medi\in-account-switch.png" alt-text="Screenshot of the options to restore a database or container to the same account."::: -1. In the **Database** field, enter a search query to filter the event feed to relevant deletion events for either a container or database. +1. For **Database**, enter a search query to filter the event feed for relevant deletion events for a container or a database. - :::image type="content" source="medi/event-filter.png" alt-text="Screenshot of the event filter showing deletion events for containers and databases."::: + :::image type="content" source="medi/event-filter.png" alt-text="Screenshot of the event filter showing deletion events for containers and databases."::: 1. Next, specify **Start** and **End** values to create a time window to use to filter deletion events. - :::image type="content" source="medi/date-filter.png" alt-text="Screenshot of the start and end date filters further filtering down deletion events."::: + :::image type="content" source="medi/date-filter.png" alt-text="Screenshot of the start and end date filters further filtering down deletion events."::: - > [!NOTE] - > The **Start** filter is limited to at most 30 days before the present date. + > [!NOTE] + > The **Start** filter is limited to at most 30 days before the present date. -1. Select **Refresh** to update the list of events on different resource types with your filters applied. +1. Select **Refresh** to update the list of events for different resource types with your filters applied. -1. Verify the time and select **Restore** to start restoration of the selected resource that was previously deleted. +1. Verify the time, and then select **Restore** to start restoration of the selected resource that was previously deleted. - :::image type="content" source="medi/restore-confirmation.png" alt-text="Screenshot of the confirmation dialog prior to a restore operation."::: + :::image type="content" source="medi/restore-confirmation.png" alt-text="Screenshot of the confirmation dialog prior to a restore operation."::: - > [!IMPORTANT] - > No more than three restore operations can be active at any given time on the same account. Deleting the source account while a restore operation is in-progress could result in the failure of the restore operation. + > [!IMPORTANT] + > No more than three restore operations can be active at any time on the same account. Deleting the source account while a restore operation is in progress might result in the failure of the restore operation. - > [!NOTE] - > The event feed will display certain resources as **"Not restorable"**. 
The feel will provide more information why the resource cannot be restored.In most cases, you will be required to restore the parent database before you can restore any of its child containers. + > [!NOTE] + > The event feed displays resources as **Not restorable**. The feed provides more information about why the resource can't be restored. In most cases, you must restore the parent database before you can restore any of the database's child containers. -1. After initiating a restore operation, track the operation using the notifications area of the Azure portal. The notification provides the status of the resource being restored. While restore is in progress, the status of the container is **Creating**. After the restore operation completes, the status will change to **Online**. +1. After you initiate a restore operation, track the operation by using the notifications area of the Azure portal. The notification provides the status of the resource that's being restored. While restore is in progress, the status of the container is **Creating**. After the restore operation completes, the status changes to **Online**. ### [Azure CLI](#tab/azure-cli) -Use Azure CLI to restore a deleted container or database (including child containers). +Use the Azure CLI to restore a deleted container or database. Child containers are also restored. > [!IMPORTANT] > The `cosmosdb-preview` extension for Azure CLI version **0.24.0** or later is required to access the in-account restore command. If you do not have the preview version installed, run `az extension add --name cosmosdb-preview --version 0.24.0`. :::zone pivot="api-nosql" -1. Retrieve a list of all live and deleted restorable database accounts using [`az cosmosdb restorable-database-account list`](/cli/azure/cosmosdb/restorable-database-account#az-cosmosdb-restorable-database-account-list). +1. Retrieve a list of all live and deleted restorable database accounts by using [az cosmosdb restorable-database-account list](/cli/azure/cosmosdb/restorable-database-account#az-cosmosdb-restorable-database-account-list): ```azurecli az cosmosdb restorable-database-account list \ Use Azure CLI to restore a deleted container or database (including child contai ] ``` -1. Use [`az cosmosdb sql restorable-database list`](/cli/azure/cosmosdb/sql/restorable-database#az-cosmosdb-sql-restorable-database-list) to list all restorable versions of databases for live accounts. +1. Use [az cosmosdb sql restorable-database list](/cli/azure/cosmosdb/sql/restorable-database#az-cosmosdb-sql-restorable-database-list) to list all restorable versions of databases for live accounts: ```azurecli az cosmosdb sql restorable-database list \ Use Azure CLI to restore a deleted container or database (including child contai --location <location> ``` - > [!NOTE] - > Listing all the restorable database deletion events allows you to choose the right database in a scenario where the actual time of existence is unknown. If the event feed contains the **Delete** operation type in its response, then itΓÇÖs a deleted database and it can be restored within the same account. The restore timestamp can be set to any timestamp before the deletion timestamp and within the retention window. + > [!NOTE] + > Listing all the restorable database deletion events allows you to choose the right database in a scenario in which the actual time of existence is unknown. 
If the event feed contains the **Delete** operation type in its response, then itΓÇÖs a deleted database, and it can be restored within the same account. The restore time stamp can be set to any time stamp before the deletion time stamp and within the retention window. -1. Use [`az cosmosdb sql restorable-container list`](/cli/azure/cosmosdb/sql/restorable-container#az-cosmosdb-sql-restorable-container-list) to list all the versions of restorable containers within a specific database. +1. Use [az cosmosdb sql restorable-container list](/cli/azure/cosmosdb/sql/restorable-container#az-cosmosdb-sql-restorable-container-list) to list all the versions of restorable containers within a specific database: ```azurecli az cosmosdb sql restorable-container list \ Use Azure CLI to restore a deleted container or database (including child contai --location <location> ``` - > [!NOTE] - > Listing all the restorable database deletion events allows you allows you to choose the right container in a scenario where the actual time of existence is unknown. If the event feed contains the **Delete** operation type in its response, then itΓÇÖs a deleted container and it can be restored within the same account. The restore timestamp can be set to any timestamp before the deletion timestamp and within the retention window. + > [!NOTE] + > Listing all the restorable database deletion events allows you to choose the right container in a scenario in which the actual time of existence is unknown. If the event feed contains the **Delete** operation type in its response, then itΓÇÖs a deleted container, and it can be restored within the same account. The restore time stamp can be set to any time stamp before the deletion time stamp and within the retention window. -1. Trigger a restore operation for a deleted database using [`az cosmosdb sql database restore`](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-restore). +1. Initiate a restore operation for a deleted database by using [az cosmosdb sql database restore](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-restore): ```azurecli az cosmosdb sql database restore \ Use Azure CLI to restore a deleted container or database (including child contai ΓÇ» ΓÇ» --restore-timestamp <timestamp> ``` -1. Trigger a restore operation for a deleted container using [`az cosmosdb sql container restore`](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-restore). +1. Initiate a restore operation for a deleted container by using [az cosmosdb sql container restore](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-restore): ```azurecli az cosmosdb sql container restore \ Use Azure CLI to restore a deleted container or database (including child contai :::zone pivot="api-mongodb" -1. Retrieve a list of all live and deleted restorable database accounts using [`az cosmosdb restorable-database-account list`](/cli/azure/cosmosdb/restorable-database-account#az-cosmosdb-restorable-database-account-list). +1. Retrieve a list of all live and deleted restorable database accounts by using [az cosmosdb restorable-database-account list](/cli/azure/cosmosdb/restorable-database-account#az-cosmosdb-restorable-database-account-list): ```azurecli az cosmosdb restorable-database-account list \ Use Azure CLI to restore a deleted container or database (including child contai ] ``` -1. 
Use [`az cosmosdb mongodb restorable-database list`](/cli/azure/cosmosdb/mongodb/restorable-database#az-cosmosdb-mongodb-restorable-database-list) to list all restorable versions of databases for live accounts. +1. Use [az cosmosdb mongodb restorable-database list](/cli/azure/cosmosdb/mongodb/restorable-database#az-cosmosdb-mongodb-restorable-database-list) to list all restorable versions of databases for live accounts: ```azurecli az cosmosdb mongodb restorable-database list \ Use Azure CLI to restore a deleted container or database (including child contai --location <location> ``` -1. Use [`az cosmosdb mongodb restorable-collection list`](/cli/azure/cosmosdb/mongodb/restorable-collection#az-cosmosdb-mongodb-restorable-collection-list) to list all the versions of restorable collections within a specific database. +1. Use [az cosmosdb mongodb restorable-collection list](/cli/azure/cosmosdb/mongodb/restorable-collection#az-cosmosdb-mongodb-restorable-collection-list) to list all the versions of restorable collections within a specific database: ```azurecli az cosmosdb mongodb restorable-collection list \ Use Azure CLI to restore a deleted container or database (including child contai --location <location> ``` -1. Trigger a restore operation for a deleted database using [`az cosmosdb mongodb database restore`](/cli/azure/cosmosdb/mongodb/database#az-cosmosdb-mongodb-database-restore). +1. Initiate a restore operation for a deleted database by using [az cosmosdb mongodb database restore](/cli/azure/cosmosdb/mongodb/database#az-cosmosdb-mongodb-database-restore): ```azurecli az cosmosdb mongodb database restore \ Use Azure CLI to restore a deleted container or database (including child contai ΓÇ» ΓÇ» --restore-timestamp <timestamp> ``` -1. Trigger a restore operation for a deleted collection using [`az cosmosdb mongodb collection restore`](/cli/azure/cosmosdb/mongodb/collection#az-cosmosdb-mongodb-collection-restore). +1. Initiate a restore operation for a deleted collection by using [az cosmosdb mongodb collection restore](/cli/azure/cosmosdb/mongodb/collection#az-cosmosdb-mongodb-collection-restore): ```azurecli az cosmosdb mongodb collection restore \ Use Azure CLI to restore a deleted container or database (including child contai :::zone pivot="api-gremlin" -1. Retrieve a list of all live and deleted restorable database accounts using [`az cosmosdb restorable-database-account list`](/cli/azure/cosmosdb/restorable-database-account#az-cosmosdb-restorable-database-account-list). +1. Retrieve a list of all live and deleted restorable database accounts by using [az cosmosdb restorable-database-account list](/cli/azure/cosmosdb/restorable-database-account#az-cosmosdb-restorable-database-account-list): ```azurecli az cosmosdb restorable-database-account list \ Use Azure CLI to restore a deleted container or database (including child contai ] ``` -1. Use [`az cosmosdb gremlin restorable-database list`](/cli/azure/cosmosdb/gremlin/restorable-database#az-cosmosdb-gremlin-restorable-database-list) to list all restorable versions of databases for live accounts. +1. Use [az cosmosdb gremlin restorable-database list](/cli/azure/cosmosdb/gremlin/restorable-database#az-cosmosdb-gremlin-restorable-database-list) to list all restorable versions of databases for live accounts: ```azurecli az cosmosdb gremlin restorable-database list \ Use Azure CLI to restore a deleted container or database (including child contai --location <location> ``` -1. 
Use [`az cosmosdb gremlin restorable-graph list`](/cli/azure/cosmosdb/gremlin/restorable-graph#az-cosmosdb-gremlin-restorable-graph-list) to list all the versions of restorable graphs within a specific database. +1. Use [az cosmosdb gremlin restorable-graph list](/cli/azure/cosmosdb/gremlin/restorable-graph#az-cosmosdb-gremlin-restorable-graph-list) to list all the versions of restorable graphs within a specific database: ```azurecli az cosmosdb gremlin restorable-graph list \ Use Azure CLI to restore a deleted container or database (including child contai --location <location> ``` -1. Trigger a restore operation for a deleted database using [`az cosmosdb gremlin database restore`](/cli/azure/cosmosdb/gremlin/database#az-cosmosdb-gremlin-database-restore). +1. Initiate a restore operation for a deleted database by using [az cosmosdb gremlin database restore](/cli/azure/cosmosdb/gremlin/database#az-cosmosdb-gremlin-database-restore): ```azurecli az cosmosdb gremlin database restore \ Use Azure CLI to restore a deleted container or database (including child contai --restore-timestamp <timestamp> ``` -1. Trigger a restore operation for a deleted graph using [`az cosmosdb gremlin graph restore`](/cli/azure/cosmosdb/gremlin/graph#az-cosmosdb-gremlin-graph-restore). +1. Initiate a restore operation for a deleted graph by using [az cosmosdb gremlin graph restore](/cli/azure/cosmosdb/gremlin/graph#az-cosmosdb-gremlin-graph-restore): ```azurecli az cosmosdb gremlin database restore \ Use Azure CLI to restore a deleted container or database (including child contai :::zone pivot="api-table" -1. Retrieve a list of all live and deleted restorable database accounts using [`az cosmosdb restorable-database-account list`](/cli/azure/cosmosdb/restorable-database-account#az-cosmosdb-restorable-database-account-list). +1. Retrieve a list of all live and deleted restorable database accounts by using [az cosmosdb restorable-database-account list](/cli/azure/cosmosdb/restorable-database-account#az-cosmosdb-restorable-database-account-list): ```azurecli az cosmosdb restorable-database-account list \ Use Azure CLI to restore a deleted container or database (including child contai ] ``` -1. Use [`az cosmosdb table restorable-table list`](/cli/azure/cosmosdb/table/restorable-table#az-cosmosdb-table-restorable-table-list) to list all restorable versions of tables for live accounts. +1. Use [az cosmosdb table restorable-table list](/cli/azure/cosmosdb/table/restorable-table#az-cosmosdb-table-restorable-table-list) to list all restorable versions of tables for live accounts: ```azurecli az cosmosdb table restorable-table list \ Use Azure CLI to restore a deleted container or database (including child contai --location <location> ``` -1. Trigger a restore operation for a deleted table using [`az cosmosdb table restore`](/cli/azure/cosmosdb/table#az-cosmosdb-table-restore). +1. Initiate a restore operation for a deleted table by using [az cosmosdb table restore](/cli/azure/cosmosdb/table#az-cosmosdb-table-restore): ```azurecli az cosmosdb table restore \ Use Azure CLI to restore a deleted container or database (including child contai ### [Azure PowerShell](#tab/azure-powershell) -Use Azure PowerShell to restore a deleted container or database (including child containers). +Use Azure PowerShell to restore a deleted container or database. Child containers and databases are also restored. 
> [!IMPORTANT] > The `Az.CosmosDB` module for Azure PowerShell version **2.0.5-preview** or later is required to access the in-account restore cmdlets. If you do not have the preview version installed, run `Install-Module -Name Az.CosmosDB -RequiredVersion 2.0.5-preview -AllowPrerelease`. :::zone pivot="api-nosql" -1. Retrieve a list of all live and deleted restorable database accounts using [`Get-AzCosmosDBRestorableDatabaseAccount`](/powershell/module/az.cosmosdb/get-azcosmosdbrestorabledatabaseaccount). +1. Retrieve a list of all live and deleted restorable database accounts by using the [Get-AzCosmosDBRestorableDatabaseAccount](/powershell/module/az.cosmosdb/get-azcosmosdbrestorabledatabaseaccount) cmdlet: ```azurepowershell Get-AzCosmosDBRestorableDatabaseAccount Use Azure PowerShell to restore a deleted container or database (including child RestorableLocations : {West US, East US} ``` - > [!NOTE] - > There are `CreationTime` or `DeletionTime` fields for the account. These same fields exist for regions too. These times allow you to choose the right region and a valid time range to use when restoring a resource. + > [!NOTE] + > The account has `CreationTime` or `DeletionTime` fields. These fields also exist for regions. These times allow you to choose the correct region and a valid time range to use when you restore a resource. -1. Use [`Get-AzCosmosDBSqlRestorableDatabase`](/powershell/module/az.cosmosdb/get-azcosmosdbsqlrestorabledatabase) to list all restorable versions of databases for live accounts. +1. Use the [Get-AzCosmosDBSqlRestorableDatabase](/powershell/module/az.cosmosdb/get-azcosmosdbsqlrestorabledatabase) cmdlet to list all restorable versions of databases for live accounts: ```azurepowershell $parameters = @{ Use Azure PowerShell to restore a deleted container or database (including child Get-AzCosmosDBSqlRestorableDatabase @parameters ``` - > [!NOTE] - > Listing all the restorable database deletion events allows you to choose the right database in a scenario where the actual time of existence is unknown. If the event feed contains the **Delete** operation type in its response, then itΓÇÖs a deleted database and it can be restored within the same account. The restore timestamp can be set to any timestamp before the deletion timestamp and within the retention window. + > [!NOTE] + > Listing all the restorable database deletion events allows you to choose the right database in a scenario where the actual time of existence is unknown. If the event feed contains the **Delete** operation type in its response, then itΓÇÖs a deleted database and it can be restored within the same account. The restore timestamp can be set to any timestamp before the deletion timestamp and within the retention window. -1. Use [`Get-AzCosmosDBSqlRestorableContainer`](/powershell/module/az.cosmosdb/get-azcosmosdbsqlrestorablecontainer) list all the versions of restorable containers within a specific database. +1. Use the [Get-AzCosmosDBSqlRestorableContainer](/powershell/module/az.cosmosdb/get-azcosmosdbsqlrestorablecontainer) cmdlet to list all the versions of restorable containers within a specific database: ```azurepowershell $parameters = @{ Use Azure PowerShell to restore a deleted container or database (including child Get-AzCosmosDBSqlRestorableContainer @parameters ``` - > [!NOTE] - > Listing all the restorable database deletion events allows you allows you to choose the right container in a scenario where the actual time of existence is unknown. 
If the event feed contains the **Delete** operation type in its response, then itΓÇÖs a deleted container and it can be restored within the same account. The restore timestamp can be set to any timestamp before the deletion timestamp and within the retention window. + > [!NOTE] + > Listing all the restorable database deletion events allows you allows you to choose the right container in a scenario where the actual time of existence is unknown. If the event feed contains the **Delete** operation type in its response, then itΓÇÖs a deleted container and it can be restored within the same account. The restore timestamp can be set to any timestamp before the deletion timestamp and within the retention window. -1. Trigger a restore operation for a deleted database with `Restore-AzCosmosDBSqlDatabase`. +1. Initiate a restore operation for a deleted database by using the Restore-AzCosmosDBSqlDatabase cmdlet: ```azurepowershell $parameters = @{ Use Azure PowerShell to restore a deleted container or database (including child Restore-AzCosmosDBSqlDatabase @parameters ``` -1. Trigger a restore operation for a deleted container with `Restore-AzCosmosDBSqlContainer`. +1. Initiate a restore operation for a deleted container by using the Restore-AzCosmosDBSqlContainer cmdlet: ```azurepowershell $parameters = @{ Use Azure PowerShell to restore a deleted container or database (including child :::zone pivot="api-mongodb" -1. Retrieve a list of all live and deleted restorable database accounts using [`Get-AzCosmosDBRestorableDatabaseAccount`](/powershell/module/az.cosmosdb/get-azcosmosdbrestorabledatabaseaccount). +1. Retrieve a list of all live and deleted restorable database accounts by using the [Get-AzCosmosDBRestorableDatabaseAccount](/powershell/module/az.cosmosdb/get-azcosmosdbrestorabledatabaseaccount) cmdlet: ```azurepowershell Get-AzCosmosDBRestorableDatabaseAccount Use Azure PowerShell to restore a deleted container or database (including child RestorableLocations : {West US, East US} ``` - > [!NOTE] - > There are `CreationTime` or `DeletionTime` fields for the account. These same fields exist for regions too. These times allow you to choose the right region and a valid time range to use when restoring a resource. + > [!NOTE] + > The account has `CreationTime` or `DeletionTime` fields. These fields also exist for regions. These times allow you to choose the correct region and a valid time range to use when you restore a resource. -1. Use [`Get-AzCosmosdbMongoDBRestorableDatabase`](/powershell/module/az.cosmosdb/get-azcosmosdbmongodbrestorabledatabase) to list all restorable versions of databases for live accounts. +1. Use [Get-AzCosmosdbMongoDBRestorableDatabase](/powershell/module/az.cosmosdb/get-azcosmosdbmongodbrestorabledatabase) to list all restorable versions of databases for live accounts: ```azurepowershell $parameters = @{ Use Azure PowerShell to restore a deleted container or database (including child Get-AzCosmosdbMongoDBRestorableDatabase @parameters ``` -1. Use [`Get-AzCosmosDBMongoDBRestorableCollection`](/powershell/module/az.cosmosdb/get-azcosmosdbmongodbrestorablecollection) to list all the versions of restorable collections within a specific database. +1. 
Use the [Get-AzCosmosDBMongoDBRestorableCollection](/powershell/module/az.cosmosdb/get-azcosmosdbmongodbrestorablecollection) cmdlet to list all the versions of restorable collections within a specific database: ```azurepowershell $parameters = @{ Use Azure PowerShell to restore a deleted container or database (including child Get-AzCosmosDBMongoDBRestorableCollection @parameters ``` -1. Trigger a restore operation for a deleted database with `Restore-AzCosmosDBMongoDBDatabase`. +1. Initiate a restore operation for a deleted database by using the Restore-AzCosmosDBMongoDBDatabase cmdlet: ```azurepowershell $parameters = @{ Use Azure PowerShell to restore a deleted container or database (including child Restore-AzCosmosDBMongoDBDatabase @parameters ``` -1. Trigger a restore operation for a deleted collection with `Restore-AzCosmosDBMongoDBCollection`. +1. Initiate a restore operation for a deleted collection by using the Restore-AzCosmosDBMongoDBCollection cmdlet: ```azurepowershell $parameters = @{ Use Azure PowerShell to restore a deleted container or database (including child :::zone pivot="api-gremlin" -1. Retrieve a list of all live and deleted restorable database accounts using [`Get-AzCosmosDBRestorableDatabaseAccount`](/powershell/module/az.cosmosdb/get-azcosmosdbrestorabledatabaseaccount). +1. Retrieve a list of all live and deleted restorable database accounts by using the [Get-AzCosmosDBRestorableDatabaseAccount](/powershell/module/az.cosmosdb/get-azcosmosdbrestorabledatabaseaccount) cmdlet: ```azurepowershell Get-AzCosmosDBRestorableDatabaseAccount Use Azure PowerShell to restore a deleted container or database (including child RestorableLocations : {West US, East US} ``` - > [!NOTE] - > There are `CreationTime` or `DeletionTime` fields for the account. These same fields exist for regions too. These times allow you to choose the right region and a valid time range to use when restoring a resource. + > [!NOTE] + > The account has `CreationTime` or `DeletionTime` fields. These fields also exist for regions. These times allow you to choose the correct region and a valid time range to use when you restore a resource. -1. Use [`Get-AzCosmosdbGremlinRestorableDatabase`](/powershell/module/az.cosmosdb/get-azcosmosdbgremlinrestorabledatabase) to list all restorable versions of databases for live accounts. +1. Use the [Get-AzCosmosdbGremlinRestorableDatabase](/powershell/module/az.cosmosdb/get-azcosmosdbgremlinrestorabledatabase) cmdlet to list all restorable versions of databases for live accounts: ```azurepowershell $parameters = @{ Use Azure PowerShell to restore a deleted container or database (including child Get-AzCosmosdbGremlinRestorableDatabase @parameters ``` -1. Use [`Get-AzCosmosdbGremlinRestorableGraph`](/powershell/module/az.cosmosdb/get-azcosmosdbgremlinrestorablegraph) to list all the versions of restorable graphs within a specific database. +1. Use the [Get-AzCosmosdbGremlinRestorableGraph](/powershell/module/az.cosmosdb/get-azcosmosdbgremlinrestorablegraph) cmdlet to list all versions of restorable graphs that are in a specific database: ```azurepowershell $parameters = @{ Use Azure PowerShell to restore a deleted container or database (including child Get-AzCosmosdbGremlinRestorableGraph @parameters ``` -1. Trigger a restore operation for a deleted database with `Restore-AzCosmosDBGremlinDatabase`. +1. 
Initiate a restore operation for a deleted database by using the Restore-AzCosmosDBGremlinDatabase cmdlet: ```azurepowershell $parameters = @{ Use Azure PowerShell to restore a deleted container or database (including child Restore-AzCosmosDBGremlinDatabase @parameters ``` -1. Trigger a restore operation for a deleted graph with `Restore-AzCosmosDBGremlinGraph`. +1. Initiate a restore operation for a deleted graph by using the Restore-AzCosmosDBGremlinGraph cmdlet: ```azurepowershell $parameters = @{ Use Azure PowerShell to restore a deleted container or database (including child :::zone pivot="api-table" -1. Retrieve a list of all live and deleted restorable database accounts using [`Get-AzCosmosDBRestorableDatabaseAccount`](/powershell/module/az.cosmosdb/get-azcosmosdbrestorabledatabaseaccount). +1. Retrieve a list of all live and deleted restorable database accounts by using the [Get-AzCosmosDBRestorableDatabaseAccount](/powershell/module/az.cosmosdb/get-azcosmosdbrestorabledatabaseaccount) cmdlet: ```azurepowershell Get-AzCosmosDBRestorableDatabaseAccount Use Azure PowerShell to restore a deleted container or database (including child RestorableLocations : {West US, East US} ``` - > [!NOTE] - > There are `CreationTime` or `DeletionTime` fields for the account. These same fields exist for regions too. These times allow you to choose the right region and a valid time range to use when restoring a resource. + > [!NOTE] + > The account has `CreationTime` or `DeletionTime` fields. These fields also exist for regions. These times allow you to choose the correct region and a valid time range to use when you restore a resource. -1. Use [`Get-AzCosmosdbTableRestorableTable`](/powershell/module/az.cosmosdb/get-azcosmosdbtablerestorabletable) to list all restorable versions of tables for live accounts. +1. Use the [Get-AzCosmosdbTableRestorableTable](/powershell/module/az.cosmosdb/get-azcosmosdbtablerestorabletable) cmdlet to list all restorable versions of tables for live accounts: ```azurepowershell $parameters = @{ Use Azure PowerShell to restore a deleted container or database (including child Get-AzCosmosdbTableRestorableTable @parameters ``` -1. Trigger a restore operation for a deleted table with `Restore-AzCosmosDBTable`. +1. Initiate a restore operation for a deleted table by using the Restore-AzCosmosDBTable cmdlet: ```azurepowershell $parameters = @{ Use Azure PowerShell to restore a deleted container or database (including child ### [Azure Resource Manager template](#tab/azure-resource-manager) -You can restore deleted containers and databases using an Azure Resource Manager template. +You can restore deleted containers and databases by using an Azure Resource Manager template. -1. Create or locate an Azure Cosmos DB resource in your template. Here's a generic example of a resource. +1. Create or locate an Azure Cosmos DB resource in your template. Here's a generic example of a resource: ```json { You can restore deleted containers and databases using an Azure Resource Manager } ``` -1. Update the Azure Cosmos DB resource in your template by: +1. To update the Azure Cosmos DB resource in your template: - - Setting `properties.createMode` to `restore`. - - Defining a `properties.restoreParameters` object. - - Setting `properties.restoreParameters.restoreTimestampInUtc` to a UTC timestamp. - - Setting `properties.restoreParameters.restoreSource` to the **instance identifier** of the account that is the source of the restore operation. + - Set `properties.createMode` to `restore`. 
+ - Define a `properties.restoreParameters` object. + - Set `properties.restoreParameters.restoreTimestampInUtc` to a UTC time stamp. + - Set `properties.restoreParameters.restoreSource` to the **instance identifier** of the account that is the source of the restore operation. :::zone pivot="api-nosql" You can restore deleted containers and databases using an Azure Resource Manager :::zone-end - > [!NOTE] - > Use [`az cosmosdb restorable-database-account list`](/cli/azure/cosmosdb/restorable-database-account#az-cosmosdb-restorable-database-account-list) to retrieve a list of instance identifiers for all live and deleted restorable database accounts. + > [!NOTE] + > Use [az cosmosdb restorable-database-account list](/cli/azure/cosmosdb/restorable-database-account#az-cosmosdb-restorable-database-account-list) to retrieve a list of instance identifiers for all live and deleted restorable database accounts. -1. Deploy the template using [`az deployment group create`](/cli/azure/deployment/group#az-deployment-group-create). +1. Deploy the template by using [az deployment group create](/cli/azure/deployment/group#az-deployment-group-create): ```azurecli-interactive az deployment group create \ You can restore deleted containers and databases using an Azure Resource Manager ## Track the status of a restore operation -When a point-in-time restore is triggered for a deleted container or database, the operation is identified as an **InAccount** restore operation on the resource. +When a point-in-time restore is initiated for a deleted container or database, the operation is identified as an **InAccount** restore operation on the resource. ### [Azure portal](#tab/azure-portal) -To get a list of restore operations for a specific resource, filter the Activity Log of the account using the search filter `InAccount Restore Deleted` and a time filter. The resulting list includes the `UserPrincipalName` field that identifies the user that triggered the restore operation. For more information on how to access activity logs, see [Auditing point-in-time restore actions](audit-restore-continuous.md#audit-the-restores-that-were-triggered-on-a-live-database-account). +To get a list of restore operations for a specific resource, filter the activity log of the account by using the **InAccount Restore Deleted** search filter and a time filter. The list that's returns includes the **UserPrincipalName** field, which identifies the user who initiated the restore operation. For more information about how to access activity logs, see [Audit point-in-time restore actions](audit-restore-continuous.md#audit-the-restores-that-were-triggered-on-a-live-database-account). -### [Azure CLI / Azure PowerShell / Azure Resource Manager template](#tab/azure-cli+azure-powershell+azure-resource-manager) +### [Azure CLI](#tab/azure-cli) ++Currently, to get the activity log of the account, you must use the Azure portal. Use the **InAccount Restore Deleted** search filter and a time filter. ++### [Azure PowerShell](#tab/azure-powershell) ++Currently, to get the activity log of the account, you must use the Azure portal. Use the **InAccount Restore Deleted** search filter and a time filter. ++### [Azure Resource Manager template](#tab/azure-resource-manager) -At present portal is used for getting the activity log of the account using the search filter `InAccount Restore Deleted` and a time filter. +Currently, to get the activity log of the account, you must use the Azure portal. 
Use the **InAccount Restore Deleted** search filter and a time filter. ## Next steps -- Enable continuous backup using [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), [CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template).-- [How to migrate to an account from periodic backup to continuous backup](migrate-continuous-backup.md).-- [Continuous backup mode resource model.](continuous-backup-restore-resource-model.md)-- [Manage permissions](continuous-backup-restore-permissions.md) required to restore data with continuous backup mode.+- Enable continuous backup by using the [Azure portal](provision-account-continuous-backup.md#provision-portal), [Azure PowerShell](provision-account-continuous-backup.md#provision-powershell), the [Azure CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template). +- Learn how to [migrate an account from periodic backup to continuous backup](migrate-continuous-backup.md). +- Review the [continuous backup mode resource model](continuous-backup-restore-resource-model.md). +- [Manage the permissions](continuous-backup-restore-permissions.md) that are required to restore data by using continuous backup mode. |
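To make the Azure CLI steps referenced above concrete, here's a minimal end-to-end sketch for an API for NoSQL account. The resource group, account, database, and container names and the timestamp are placeholder assumptions, and the exact parameter set can vary with the installed `cosmosdb-preview` extension version, so confirm the options with `az cosmosdb sql database restore --help` before you rely on it.

```azurecli
# Requires the cosmosdb-preview extension, version 0.24.0 or later.
az extension add --name cosmosdb-preview --version 0.24.0

# Find the restorable account and the deletion events that bound the valid restore window.
az cosmosdb restorable-database-account list \
    --account-name contoso-cosmos-account

# Restore a deleted database into the same account. The timestamp must fall before
# the deletion time and within the 30-day continuous backup retention window.
az cosmosdb sql database restore \
    --resource-group contoso-rg \
    --account-name contoso-cosmos-account \
    --name ordersdb \
    --restore-timestamp "2023-05-01T12:00:00Z"

# Restore a deleted container. The parent database must exist (or be restored first).
az cosmosdb sql container restore \
    --resource-group contoso-rg \
    --account-name contoso-cosmos-account \
    --database-name ordersdb \
    --name orders \
    --restore-timestamp "2023-05-01T12:00:00Z"
```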
cosmos-db | Intra Account Container Copy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/intra-account-container-copy.md | Title: Intra-account container copy jobs -description: Copy container data between containers within an account in Azure Cosmos DB. +description: Learn how to copy container data between containers within an account in Azure Cosmos DB (preview). Last updated 11/30/2022 -# Intra-account container copy jobs in Azure Cosmos DB (Preview) +# Intra-account container copy jobs in Azure Cosmos DB (preview) [!INCLUDE[NoSQL, Cassandra, MongoDB](includes/appliesto-nosql-mongodb-cassandra.md)] -You can perform offline container copy within an Azure Cosmos DB account using container copy jobs. +You can perform offline container copy within an Azure Cosmos DB account by using container copy jobs. -You may need to copy data within your Azure Cosmos DB account if you want to achieve any of these scenarios: +You might need to copy data within your Azure Cosmos DB account if you want to achieve any of these scenarios: * Copy all items from one container to another.-* Change the [granularity at which throughput is provisioned - from database to container](set-throughput.md) and vice-versa. +* Change the [granularity at which throughput is provisioned, from database to container](set-throughput.md) and vice versa. * Change the [partition key](partitioning-overview.md#choose-partitionkey) of a container. * Update the [unique keys](unique-keys.md) for a container.-* Rename a container/database. -* Adopt new features that are only supported on new containers. +* Rename a container or database. +* Adopt new features that are supported only for new containers. -Intra-account container copy jobs can be [created and managed using CLI commands](how-to-container-copy.md). +Intra-account container copy jobs can be [created and managed by using Azure CLI commands](how-to-container-copy.md). ## Get started +To get started, register for the relevant preview feature in the Azure portal. + ### NoSQL and Cassandra API-To get started with intra-account offline container copy for NoSQL and Cassandra API accounts, register for **"Intra-account offline container copy (Cassandra & NoSQL)"** preview feature flag from the ['Preview Features'](access-previews.md) list in the Azure portal. Once the registration is complete, the preview is effective for all Cassandra and API for NoSQL accounts in the subscription. ++To get started with intra-account offline container copy for NoSQL and Cassandra API accounts, register for the **Intra-account offline container copy (Cassandra & NoSQL)** preview feature flag in [Preview Features](access-previews.md) in the Azure portal. When the registration is complete, the preview is effective for all Cassandra and API for NoSQL accounts in the subscription. ### API for MongoDB-To get started with intra-account offline container copy for Azure Cosmos DB for MongoDB accounts, register for **"Intra-account offline container copy (MongoDB)"** preview feature flag from the ['Preview Features'](access-previews.md) list in the Azure portal. Once the registration is complete, the preview is effective for all API for MongoDB accounts in the subscription. -## How to do container copy? +To get started with intra-account offline container copy for Azure Cosmos DB for MongoDB accounts, register for the **Intra-account offline container copy (MongoDB)** preview feature flag in [Preview Features](access-previews.md) in the Azure portal. 
Once the registration is complete, the preview is effective for all API for MongoDB accounts in the subscription. ++<a name="how-to-do-container-copy"></a> -1. Create the target Azure Cosmos DB container with the desired settings (partition key, throughput granularity, RUs, unique key, etc.). -2. Stop the operations on the source container by pausing the application instances or any clients connecting to it. -3. [Create the container copy job](how-to-container-copy.md). -4. [Monitor the progress of the container copy job](how-to-container-copy.md#monitor-the-progress-of-a-container-copy-job) and wait until it's completed. -5. Resume the operations by appropriately pointing the application or client to the source or target container copy as intended. +## Copy a container ++1. Create the target Azure Cosmos DB container by using the settings that you want to use (partition key, throughput granularity, request units, unique key, and so on). +1. Stop the operations on the source container by pausing the application instances or any clients that connect to it. +1. [Create the container copy job](how-to-container-copy.md). +1. [Monitor the progress of the container copy job](how-to-container-copy.md#monitor-the-progress-of-a-container-copy-job) and wait until it's completed. +1. Resume the operations by appropriately pointing the application or client to the source or target container copy as intended. ## How does intra-account container copy work? -Intra-account container copy jobs perform offline data copy using the source container's incremental change feed log. +Intra-account container copy jobs perform offline data copy by using the source container's incremental change feed log. -* The platform allocates server-side compute instances for the Azure Cosmos DB account. -* These instances are allocated when one or more container copy jobs are created within the account. -* The container copy jobs run on these instances. -* A single job is executed across all instances at any time. -* The instances are shared by all the container copy jobs running within the same account. -* The platform may deallocate the instances if they're idle for >15 mins. +1. The platform allocates server-side compute instances for the Azure Cosmos DB account. +1. These instances are allocated when one or more container copy jobs are created within the account. +1. The container copy jobs run on these instances. +1. A single job is executed across all instances at any time. +1. The instances are shared by all the container copy jobs that are running within the same account. +1. The platform might deallocate the instances if they're idle for longer than 15 minutes. > [!NOTE]-> We currently only support offline container copy jobs. So, we strongly recommend to stop performing any operations on the source container prior to beginning the container copy. Item deletions and updates done on the source container after beginning the copy job may not be captured. Hence, continuing to perform operations on the source container while the container job is in progress may result in additional or missing data on the target container. +> We currently support only offline container copy jobs. We strongly recommend that you stop performing any operations on the source container before you begin the container copy. Item deletions and updates that are done on the source container after you start the copy job might not be captured. 
If you continue to perform operations on the source container while the container job is in progress, you might have duplicate or missing data on the target container. -## Factors affecting the rate of a container copy job +## Factors that affect the rate of a container copy job The rate of container copy job progress is determined by these factors: -* Source container/database throughput setting. +* The source container or database throughput setting. -* Target container/database throughput setting. +* The target container or database throughput setting. - > [!TIP] - > Set the target container throughput to at least two times the source container's throughput. + > [!TIP] + > Set the target container throughput to at least two times the source container's throughput. -* Server-side compute instances allocated to the Azure Cosmos DB account for performing the data transfer. +* Server-side compute instances that are allocated to the Azure Cosmos DB account for performing the data transfer. - > [!IMPORTANT] - > The default SKU offers two 4-vCPU 16-GB server-side instances per account. + > [!IMPORTANT] + > The default SKU offers two 4-vCPU 16-GB server-side instances per account. ## Limitations ### Preview eligibility criteria -Container copy jobs don't work with accounts having following capabilities enabled. You will need to disable these features before running the container copy jobs. --- [Disable local auth](how-to-setup-rbac.md#use-azure-resource-manager-templates)-- [Merge partition](merge.md).+Container copy jobs don't work with accounts that have the following capabilities enabled. Disable these features before you run container copy jobs: +* [Disable local auth](how-to-setup-rbac.md#use-azure-resource-manager-templates) +* [Merge partition](merge.md) -### Account Configurations --- The time-to-live (TTL) setting is not adjusted in the destination container. As a result, if a document has not expired in the source container, it will start its countdown anew in the destination container.+### Account configurations +The Time to Live (TTL) setting isn't adjusted in the destination container. As a result, if a document hasn't expired in the source container, it starts its countdown anew in the destination container. ## FAQs -### Is there an SLA for the container copy jobs? +### Is there a service-level agreement for container copy jobs? -Container copy jobs are currently supported on best-effort basis. We don't provide any SLA guarantees for the time taken to complete these jobs. +Container copy jobs are currently supported on a best-effort basis. We don't provide any service-level agreement (SLA) guarantees for the time it takes for the jobs to finish. ### Can I create multiple container copy jobs within an account? -Yes, you can create multiple jobs within the same account. The jobs run consecutively. You can [list all the jobs](how-to-container-copy.md#list-all-the-container-copy-jobs-created-in-an-account) created within an account and monitor their progress. +Yes, you can create multiple jobs within the same account. The jobs run consecutively. You can [list all the jobs](how-to-container-copy.md#list-all-the-container-copy-jobs-created-in-an-account) that are created within an account, and monitor their progress. ### Can I copy an entire database within the Azure Cosmos DB account? You must create a job for each container in the database. ### I have an Azure Cosmos DB account with multiple regions. In which region will the container copy job run? 
-The container copy job runs in the write region. If there are accounts configured with multi-region writes, the job runs in one of the regions from the list. +The container copy job runs in the write region. In an account that's configured with multi-region writes, the job runs in one of the regions in the list of write regions. ### What happens to the container copy jobs when the account's write region changes? -The account's write region may change in the rare scenario of a region outage or due to manual failover. In such a scenario, incomplete container copy jobs created within the account would fail. You would need to recreate these failed jobs. Recreated jobs would then run in the new (current) write region. -+The account's write region might change in the rare scenario of a region outage or due to manual failover. In this scenario, incomplete container copy jobs that were created within the account fail. You would need to re-create these failed jobs. Re-created jobs then run in the new (current) write region. ## Supported regions Currently, container copy is supported in the following regions: -| **Americas** | **Europe and Africa** | **Asia Pacific** | +| Americas | Europe and Africa | Asia Pacific | | | -- | -- | | Brazil South | France Central | Australia Central | | Canada Central | France South | Australia Central 2 | Currently, container copy is supported in the following regions: | East US 2 | Norway West | Southeast Asia | | East US 2 EUAP | Switzerland North | UAE Central | | North Central US | Switzerland West | West India |-| South Central US | UK South | | -| West Central US | UK West | | -| West US | West Europe | -| West US 2 | | +| South Central US | UK South | Not supported | +| West Central US | UK West | Not supported | +| West US | West Europe | Not supported | +| West US 2 | Not supported | Not supported | ++## Known and common issues -## Known/common issues +* Error - Owner resource doesn't exist. -* Error - Owner resource doesn't exist + If the job creation fails and displays the error *Owner resource doesn't exist* (error code 404), either the target container hasn't been created yet or the container name that's used to create the job doesn't match an actual container name. - If the job creation fails with the error *"Owner resource doesn't exist"*, it means that the target container wasn't created or was mis-spelt. - Make sure the target container is created before running the job as specified in the [overview section.](#how-to-do-container-copy) + Make sure that the target container is created before you run the job as specified in the [overview](#how-to-do-container-copy), and ensure that the container name in the job matches an actual container name. ```output "code": "404", Currently, container copy is supported in the following regions: * Error - Request is unauthorized. - If the request fails with error Unauthorized (401), this could happen because Local Authorization is disabled, see [disable local auth](how-to-setup-rbac.md#use-azure-resource-manager-templates). Container copy jobs use primary key to authenticate and if local authorization is disabled, the job creation fails. You need to enable local authorization for container copy jobs to work. + If the request fails and displays the error *Unauthorized* (error code 401), local authorization might be disabled. Learn how to [enable local authorization](how-to-setup-rbac.md#use-azure-resource-manager-templates). ++ Container copy jobs use primary keys to authenticate. 
If local authorization is disabled, the job creation fails. Local authorization must be enabled for container copy jobs to work. ```output "code": "401", Currently, container copy is supported in the following regions: * Error - Error while getting resources for job. - This error can occur due to internal server issues. To resolve this issue, contact Microsoft support by raising a **New Support Request** from the Azure portal. Set the Problem Type as **'Data Migration'** and Problem subtype as **'Intra-account container copy'**. + This error might occur due to internal server issues. To resolve this issue, contact Microsoft Support by opening a **New Support Request** in the Azure portal. For **Problem Type**, select **Data Migration**. For **Problem subtype**, select **Intra-account container copy**. ```output "code": "500" "message": "Error while getting resources for job, StatusCode: 500, SubStatusCode: 0, OperationId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, ActivityId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx- ``` - + ``` ## Next steps -* You can learn [how to create, monitor and manage container copy jobs within Azure Cosmos DB account using CLI commands](how-to-container-copy.md). +* Learn [how to create, monitor, and manage container copy jobs](how-to-container-copy.md) within Azure Cosmos DB account by using CLI commands. |
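As an illustration of step 1 in the copy procedure above (create the target container before you start the job), the following Azure CLI sketch creates a target NoSQL container with a new partition key and with throughput set to twice that of a hypothetical 400-RU/s source container, per the tip about target throughput. All resource names and values are placeholder assumptions; the copy job itself is created with the commands covered in the linked how-to article.

```azurecli
# Create the target container with the desired partition key.
# Throughput is set to twice the assumed 400 RU/s of the source container
# so that the copy job is less likely to be throttled on writes.
az cosmosdb sql container create \
    --resource-group contoso-rg \
    --account-name contoso-cosmos-account \
    --database-name ordersdb \
    --name orders-by-customer \
    --partition-key-path "/customerId" \
    --throughput 800
```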
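For the *Request is unauthorized* known issue above, one way to check and re-enable local (primary key) authorization is through the generic `az resource` commands, sketched below. This is an assumption-level workaround rather than the ARM-template approach in the linked article, so verify the property on your account first and prefer the documented method where it applies.

```azurecli
# Check whether local (key-based) authorization is disabled on the account.
az resource show \
    --resource-group contoso-rg \
    --resource-type "Microsoft.DocumentDB/databaseAccounts" \
    --name contoso-cosmos-account \
    --query "properties.disableLocalAuth"

# Re-enable local authorization so that container copy jobs can authenticate with the primary key.
az resource update \
    --resource-group contoso-rg \
    --resource-type "Microsoft.DocumentDB/databaseAccounts" \
    --name contoso-cosmos-account \
    --set properties.disableLocalAuth=false
```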
cosmos-db | Feature Support 42 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-42.md | Title: 4.2 server version supported features and syntax in Azure Cosmos DB for MongoDB -description: Learn about Azure Cosmos DB for MongoDB 4.2 server version supported features and syntax. Learn about the database commands, query language support, datatypes, aggregation pipeline commands, and operators supported. +description: Learn about Azure Cosmos DB for MongoDB 4.2 server version supported features and syntax. Learn about supported database commands, query language support, data types, aggregation pipeline commands, and operators. Last updated 10/12/2022 -# Azure Cosmos DB for MongoDB (4.2 server version): supported features and syntax +# Azure Cosmos DB for MongoDB (4.2 server version): Supported features and syntax [!INCLUDE[MongoDB](../includes/appliesto-mongodb.md)] -Azure Cosmos DB is Microsoft's globally distributed multi-model database service, offering [multiple database APIs](../choose-api.md). You can communicate with the Azure Cosmos DB for MongoDB using any of the open-source MongoDB client [drivers](https://docs.mongodb.org/ecosystem/drivers). The Azure Cosmos DB for MongoDB enables the use of existing client drivers by adhering to the MongoDB [wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol). +Azure Cosmos DB is the Microsoft globally distributed multi-model database service. Azure Cosmos DB offers [multiple database APIs](../choose-api.md). You can communicate with Azure Cosmos DB for MongoDB by using any of the open-source MongoDB client [drivers](https://docs.mongodb.org/ecosystem/drivers). Azure Cosmos DB for MongoDB supports the use of existing client drivers by adhering to the MongoDB [wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol). -By using the Azure Cosmos DB for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all of the enterprise capabilities that Azure Cosmos DB provides: [global distribution](../distribute-data-globally.md), [automatic sharding](../partitioning-overview.md), availability and latency guarantees, encryption at rest, backups, and much more. +By using Azure Cosmos DB for MongoDB, you can enjoy the benefits of MongoDB that you're used to, with all of the enterprise capabilities that Azure Cosmos DB provides: [global distribution](../distribute-data-globally.md), [automatic sharding](../partitioning-overview.md), availability and latency guarantees, encryption at rest, backups, and much more. -## Protocol Support +## Protocol support -The supported operators and any limitations or exceptions are listed below. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB for MongoDB. When you create Azure Cosmos DB for MongoDB accounts, the 3.6+ versions of accounts have the endpoint in the format `*.mongo.cosmos.azure.com` whereas the 3.2 version of accounts has the endpoint in the format `*.documents.azure.com`. +The supported operators and any limitations or exceptions are listed in this article. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB for MongoDB. When you create Azure Cosmos DB for MongoDB accounts, the 3.6+ version of accounts has an endpoint in the format `*.mongo.cosmos.azure.com`. The 3.2 version of accounts has an endpoint in the format `*.documents.azure.com`. 
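Because any wire protocol-compatible driver can connect, a typical first step is to retrieve the account's connection string and pass it to the MongoDB driver you already use. A minimal Azure CLI sketch follows; the resource group and account names are placeholder assumptions.

```azurecli
# List the connection strings for an API for MongoDB account. Pass the primary
# connection string to any MongoDB 4.2-compatible client driver.
az cosmosdb keys list \
    --resource-group contoso-rg \
    --name contoso-mongo-account \
    --type connection-strings
```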
> [!NOTE]-> This article only lists the supported server commands, and excludes client-side wrapper functions. Client-side wrapper functions such as `deleteMany()` and `updateMany()` internally utilize the `delete()` and `update()` server commands. Functions utilizing supported server commands are compatible with the Azure Cosmos DB for MongoDB. +> This article lists only the supported server commands, and excludes client-side wrapper functions. Client-side wrapper functions such as `deleteMany()` and `updateMany()` internally use the `delete()` and `update()` server commands. Functions that use supported server commands are compatible with Azure Cosmos DB for MongoDB. ## Query language support -Azure Cosmos DB for MongoDB provides comprehensive support for MongoDB query language constructs. Below you can find the detailed list of currently supported operations, operators, stages, commands, and options. +Azure Cosmos DB for MongoDB provides comprehensive support for MongoDB query language constructs. In the following sections, you can find the detailed list of currently supported operations, operators, stages, commands, and options. ## Database commands -Azure Cosmos DB for MongoDB supports the following database commands: +Azure Cosmos DB for MongoDB supports the following database commands. ### Query and write operation commands Azure Cosmos DB for MongoDB supports the following database commands: ### Transaction commands > [!NOTE]-> Multi-document transactions are only supported within a single non-sharded collection. Cross-collection and cross-shard multi-document transactions are not yet supported in the API for MongoDB. +> Multi-document transactions are supported only within a single non-sharded collection. Cross-collection and cross-shard multi-document transactions are not yet supported in the API for MongoDB. | Command | Supported | | - | | Azure Cosmos DB for MongoDB supports the following database commands: | `top` | No | | `whatsmyuri` | Yes | -## <a name="aggregation-pipeline"></a>Aggregation pipeline +<a name="aggregation-pipeline"></a> ++## Aggregation pipeline ++Azure Cosmos DB for MongoDB supports the following aggregation commands. ### Aggregation commands Azure Cosmos DB for MongoDB supports the following database commands: | `unwind` | Yes | > [!NOTE]-> The `$lookup` aggregation does not yet support the [uncorrelated subqueries](https://docs.mongodb.com/manual/reference/operator/aggregation/lookup/#join-conditions-and-uncorrelated-sub-queries) feature introduced in server version 3.6. You will receive an error with a message containing `let is not supported` if you attempt to use the `$lookup` operator with `let` and `pipeline` fields. +> The `$lookup` aggregation does not yet support the [uncorrelated subqueries](https://docs.mongodb.com/manual/reference/operator/aggregation/lookup/#join-conditions-and-uncorrelated-sub-queries) feature that's introduced in server version 3.6. If you attempt to use the `$lookup` operator with the `let` and `pipeline` fields, an error message that indicates that *`let` is not supported* appears. ### Boolean expressions Azure Cosmos DB for MongoDB supports the following database commands: ### Comparison expressions > [!NOTE]-> The API for MongoDB does not support comparison expressions with an array literal in the query. +> The API for MongoDB does not support comparison expressions that have an array literal in the query. 
| Command | Supported | | - | | Azure Cosmos DB for MongoDB supports the following database commands: ## Data types -Azure Cosmos DB for MongoDB supports documents encoded in MongoDB BSON format. Versions 4.0 and higher (4.0+) enhance the internal usage of this format to improve performance and reduce costs. Documents written or updated through an endpoint running 4.0+ benefit from this optimization. +Azure Cosmos DB for MongoDB supports documents that are encoded in MongoDB BSON format. Versions 4.0 and later (4.0+) enhance the internal usage of this format to improve performance and reduce costs. Documents that are written or updated through an endpoint running 4.0+ benefit from this optimization. -In an [upgrade scenario](upgrade-version.md), documents written prior to the upgrade to version 4.0+ won't benefit from the enhanced performance until they're updated via a write operation through the 4.0+ endpoint. +In an [upgrade scenario](upgrade-version.md), documents that were written prior to the upgrade to version 4.0+ won't benefit from the enhanced performance until they're updated via a write operation through the 4.0+ endpoint. -16-MB document support raises the size limit for your documents from 2 MB to 16 MB. This limit only applies to collections created after this feature has been enabled. Once this feature is enabled for your database account, it can't be disabled. This feature isn't compatible with the Azure Synapse Link feature and/or Continuous Backup. +16-MB document support raises the size limit for your documents from 2 MB to 16 MB. This limit applies only to collections that are created after this feature is enabled. When this feature is enabled for your database account, it can't be disabled. This feature isn't compatible with Azure Synapse Link for Azure Cosmos DB or with continuous backup. -Enabling 16 MB can be done in the features tab in the Azure portal or programmatically by [adding the `EnableMongo16MBDocumentSupport` capability](how-to-configure-capabilities.md). +To enable 16-MB document support, change the setting on the **Features** tab for the resource in the Azure portal or programmatically [add the `EnableMongo16MBDocumentSupport` capability](how-to-configure-capabilities.md). -We recommend enabling Server Side Retry and avoiding wildcard indexes to ensure requests with larger documents succeed. If necessary, raising your DB/Collection RUs may also help performance. +We recommend that you enable Server Side Retry and avoid using wildcard indexes to ensure that requests in larger documents succeed. Raising your database or collection request units might also help performance. | Command | Supported | | - | | We recommend enabling Server Side Retry and avoiding wildcard indexes to ensure ## Indexes and index properties +Azure Cosmos DB for MongoDB supports the following index commands and index properties. + ### Indexes | Command | Supported | We recommend enabling Server Side Retry and avoiding wildcard indexes to ensure | | - | | `TTL` | Yes | | `Unique` | Yes |-| `Partial` | Only supported with unique indexes | +| `Partial` | Supported only for unique indexes | | `Case Insensitive` | No | | `Sparse` | No | | `Background` | Yes | ## Operators +Azure Cosmos DB for MongoDB supports the following operators. + ### Logical operators | Command | Supported | We recommend enabling Server Side Retry and avoiding wildcard indexes to ensure | `jsonSchema` | No | | `mod` | Yes | | `regex` | Yes |-| `text` | No (Not supported. Use $regex instead.) 
| +| `text` | No (Not supported. Use `$regex` instead.) | | `where` | No | -In the $regex queries, left-anchored expressions allow index search. However, using 'i' modifier (case-insensitivity) and 'm' modifier (multiline) causes the collection scan in all expressions. +In `$regex` queries, left-anchored expressions allow index search. However, using the `i` modifier (case-insensitivity) and the `m` modifier (multiline) causes the collection to scan in all expressions. ++When there's a need to include `$` or `|`, it's best to create two (or more) `$regex` queries. ++For example, change the following original query: ++`find({x:{$regex: /^abc$/})` -When there's a need to include '$' or '|', it's best to create two (or more) regex queries. For example, given the following original query: `find({x:{$regex: /^abc$/})`, it has to be modified as follows: +To this query: `find({x:{$regex: /^abc/, x:{$regex:/^abc$/}})` -The first part will use the index to restrict the search to those documents beginning with ^abc and the second part will match the exact entries. The bar operator '|' acts as an "or" function - the query `find({x:{$regex: /^abc |^def/})` matches the documents in which field 'x' has values that begin with "abc" or "def". To utilize the index, it's recommended to break the query into two different queries joined by the $or operator: `find( {$or : [{x: $regex: /^abc/}, {$regex: /^def/}] })`. +The first part of the modified query uses the index to restrict the search to documents that begin with `^abc`. The second part of the query matches the exact entries. The bar operator (`|`) acts as an "or" function. The query `find({x:{$regex: /^abc |^def/})` matches the documents in which field `x` has values that begin with `abc` or `def`. To use the index, we recommend that you break the query into two different queries that are joined by the `$or` operator: `find( {$or : [{x: $regex: /^abc/}, {$regex: /^def/}] })`. ### Array operators The first part will use the index to restrict the search to those documents begi ## Sort operations -When you use the `findOneAndUpdate` operation, sort operations on a single field are supported, but sort operations on multiple fields aren't supported. +When you use the `findOneAndUpdate` operation, sort operations on a single field are supported. Sort operations on multiple fields aren't supported. ## Indexing The API for MongoDB [supports various indexes](indexing.md) to enable sorting on multiple fields, improve query performance, and enforce uniqueness. -## Client-side field level encryption +## Client-side field-level encryption -Client-level field encryption is a driver feature and is compatible with the API for MongoDB. Explicit encryption - were the driver explicitly encrypts each field when written is supported. Automatic encryption isn't supported. Explicit decryption and automatic decryption is supported. +Client-level field encryption is a driver feature and is compatible with the API for MongoDB. Explicit encryption, in which the driver explicitly encrypts each field when it's written, is supported. Automatic encryption isn't supported. Explicit decryption and automatic decryption is supported. -The mongocryptd shouldn't be run since it isn't needed to perform any of the supported operations. +The `mongocryptd` shouldn't be run because it isn't needed to perform any of the supported operations. ## GridFS Azure Cosmos DB supports GridFS through any GridFS-compatible Mongo driver. 
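As a concrete illustration of the GridFS support noted above, the following sketch uploads a file and streams it back through the Node.js driver's `GridFSBucket`. The connection string environment variable, database, bucket, and file names are illustrative assumptions.

```javascript
// Hedged sketch: store and retrieve a file with GridFS by using the MongoDB
// Node.js driver. COSMOS_CONNECTION_STRING, database, bucket, and file names
// are placeholders.
const { MongoClient, GridFSBucket } = require("mongodb");
const fs = require("fs");

async function main() {
  const client = new MongoClient(process.env.COSMOS_CONNECTION_STRING);
  await client.connect();
  const bucket = new GridFSBucket(client.db("test"), { bucketName: "files" });

  // Upload a local file into the bucket.
  await new Promise((resolve, reject) =>
    fs.createReadStream("./report.pdf")
      .pipe(bucket.openUploadStream("report.pdf"))
      .on("finish", resolve)
      .on("error", reject)
  );

  // Stream the stored file back to disk.
  await new Promise((resolve, reject) =>
    bucket.openDownloadStreamByName("report.pdf")
      .pipe(fs.createWriteStream("./report-copy.pdf"))
      .on("finish", resolve)
      .on("error", reject)
  );

  await client.close();
}

main().catch(console.error);
```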
## Replication -Azure Cosmos DB supports automatic, native replication at the lowest layers. This logic is extended out to achieve low-latency, global replication as well. Azure Cosmos DB doesn't support manual replication commands. +Azure Cosmos DB supports automatic, native replication at the lowest layers. This logic is also extended to achieve low-latency, global replication. Azure Cosmos DB doesn't support manual replication commands. -## Retryable Writes +## Retryable writes -The Retryable writes feature enables MongoDB drivers to automatically retry certain write operations. The feature results in more stringent requirements for certain operations, which match MongoDB protocol requirements. With this feature enabled, update operations, including deletes, in sharded collections will require the shard key to be included in the query filter or update statement. +The retryable writes feature enables MongoDB drivers to automatically retry certain write operations. The feature results in more stringent requirements for certain operations, which match MongoDB protocol requirements. With this feature enabled, update operations, including deletes, in sharded collections require the shard key to be included in the query filter or update statement. -For example, with a sharded collection, sharded on key ΓÇ£countryΓÇ¥: To delete all the documents with the field **city** = `"NYC"`, the application will need to execute the operation for all shard key (country) values if the Retryable writes feature is enabled. +For example, with a sharded collection that's sharded on the `"country"` key, to delete all the documents that have the field `"city" = "NYC"`, the application needs to execute the operation for all shard key (`"country"`) values if the retryable writes feature is enabled. -- `db.coll.deleteMany({"country": "USA", "city": "NYC"})` ΓÇô **Success**+- `db.coll.deleteMany({"country": "USA", "city": "NYC"})` - **Success** - `db.coll.deleteMany({"city": "NYC"})` - Fails with error **ShardKeyNotFound(61)** > [!NOTE]-> Retryable writes does not support bulk unordered writes at this time. If you would like to perform bulk writes with retryable writes enabled, perform bulk ordered writes. +> Retryable writes does not support bulk unordered writes at this time. If you want to perform bulk writes with retryable writes enabled, perform bulk ordered writes. -To enable the feature, [add the `EnableMongoRetryableWrites` capability](how-to-configure-capabilities.md) to your database account. This feature can also be enabled in the features tab in the Azure portal. +To enable the feature, [add the EnableMongoRetryableWrites capability](how-to-configure-capabilities.md) to your database account. This feature can also be enabled on the **Features** tab in the Azure portal. ## Sharding -Azure Cosmos DB supports automatic, server-side sharding. It manages shard creation, placement, and balancing automatically. Azure Cosmos DB doesn't support manual sharding commands, which means you don't have to invoke commands such as addShard, balancerStart, moveChunk etc. You only need to specify the shard key while creating the containers or querying the data. +Azure Cosmos DB supports automatic, server-side sharding. It automatically manages shard creation, placement, and balancing. Azure Cosmos DB doesn't support manual sharding commands, which means that you don't have to invoke commands like `addShard`, `balancerStart`, and `moveChunk`. 
You need to specify the shard key only when you create the containers or query the data. ## Sessions Azure Cosmos DB doesn't yet support server-side sessions commands. -## Time-to-live (TTL) +## Time to Live -Azure Cosmos DB supports a time-to-live (TTL) based on the timestamp of the document. TTL can be enabled for collections from the [Azure portal](https://portal.azure.com). +Azure Cosmos DB supports a Time to Live (TTL) that's based on the time stamp of the document. You can enable TTL for a collection in the [Azure portal](https://portal.azure.com). -#### Custom Time-To-Live (TTL) +### Custom TTL -This feature provides the ability to set a customer TTL on any one field in a collection. +This feature provides the ability to set a custom TTL on any one field in a collection. -On a collection with TTL enabled on a field: +On a collection that has TTL enabled on a field: -Acceptable types are: BSON date type and numeric types (integer, long, double) which will be interpreted as a unix milliseconds timestamp, for the purpose of expiration. +- Acceptable types are the BSON date type and numeric types (integer, long, or double), which will be interpreted as a Unix millisecond time stamp to determine expiration. - If the TTL field is an array, then the smallest element of the array that is of an acceptable type is considered for document expiry. -- If the TTL field is missing from a document, the document wonΓÇÖt expire.+- If the TTL field is missing from a document, the document doesnΓÇÖt expire. -- If the TTL field is not an acceptable type, the document will not expire.+- If the TTL field isn't an acceptable type, the document doesn't expire. -##### Limitations of Custom TTL +#### Limitations of a custom TTL - Only one field in a collection can have a TTL set on it. -- With a custom TTL field set, the \_ts field cannot be used for document expiration+- With a custom TTL field set, the `\_ts` field can't be used for document expiration. -- Is `\_ts` field in addition possible? No+- You can't use the `\_ts` field for expiration in addition to the custom TTL field. -##### Configuration +#### Configuration -This feature can be enabled by updating the account capability "EnableTtlOnCustomPath". Refer [how to configure capabilities](../../cosmos-db/mongodb/how-to-configure-capabilities.md) +You can enable a custom TTL by updating the `EnableTtlOnCustomPath` capability for the account. Learn [how to configure capabilities](../../cosmos-db/mongodb/how-to-configure-capabilities.md). -#### To set up the TTL: +### Set up the TTL -- `db.coll.createIndex({"YOUR_CUSTOM_TTL_FIELD":1}, {expireAfterSeconds: 10})`+To set up the TTL, run this command: `db.coll.createIndex({"YOUR_CUSTOM_TTL_FIELD":1}, {expireAfterSeconds: 10})` ## Transactions Multi-document transactions are supported within an unsharded collection. Multi-document transactions aren't supported across collections or in sharded collections. The timeout for transactions is a fixed 5 seconds. -## User and role management +## Manage users and roles -Azure Cosmos DB doesn't yet support users and roles. However, Azure Cosmos DB supports Azure role-based access control (Azure RBAC) and read-write and read-only passwords/keys that can be obtained through the [Azure portal](https://portal.azure.com) (Connection String page). +Azure Cosmos DB doesn't yet support users and roles. 
However, Azure Cosmos DB supports Azure role-based access control (Azure RBAC) and read-write and read-only passwords and keys that can be obtained through the [Azure portal](https://portal.azure.com) (on the **Connection Strings** page). -## Write Concern +## Write concerns -Some applications rely on a [Write Concern](https://docs.mongodb.com/manual/reference/write-concern/), which specifies the number of responses required during a write operation. Due to how Azure Cosmos DB handles replication in the background, all writes are automatically Quorum by default. Any write concern specified by the client code is ignored. Learn more in [Using consistency levels to maximize availability and performance](../consistency-levels.md). +Some applications rely on a [write concern](https://docs.mongodb.com/manual/reference/write-concern/), which specifies the number of responses that are required during a write operation. Due to how Azure Cosmos DB handles replication in the background, all writes are automatically Quorum by default. Any write concern that's specified by the client code is ignored. Learn how to [use consistency levels to maximize availability and performance](../consistency-levels.md). ## Next steps Some applications rely on a [Write Concern](https://docs.mongodb.com/manual/refe - Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB for MongoDB. - Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB for MongoDB. - Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.- - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md). - - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md). + - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units by using vCores or vCPUs](../convert-vcore-to-request-unit.md). + - If you know typical request rates for your current database workload, read about [estimating request units by using the Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md). |
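To complement the transaction notes in this article, here's a hedged mongo shell sketch of a multi-document transaction scoped to a single non-sharded collection, which is the only scope the API for MongoDB currently supports. The database, collection, and field names are illustrative assumptions.

```javascript
// Hedged sketch: a multi-document transaction in the mongo shell, scoped to a
// single non-sharded collection. Names and values are illustrative.
const session = db.getMongo().startSession();
const orders = session.getDatabase("test").getCollection("orders");

session.startTransaction();
try {
  orders.insertOne({ _id: 1, status: "created" });
  orders.updateOne({ _id: 1 }, { $set: { status: "paid" } });
  session.commitTransaction(); // both writes become visible together
} catch (error) {
  session.abortTransaction(); // neither write is applied
  throw error;
} finally {
  session.endSession();
}
```

Keep in mind the fixed 5-second transaction timeout mentioned earlier when you size the work done inside a transaction.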
cosmos-db | How To Configure Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-configure-capabilities.md | Title: Configure your Azure Cosmos DB for MongoDB account capabilities -description: Learn how to configure your API for MongoDB account capabilities +description: Learn how to configure your API for MongoDB account capabilities. -Capabilities are features that can be added or removed to your API for MongoDB account. Many of these features affect account behavior so it's important to be fully aware of the effect a capability will have before enabling or disabling it. Several capabilities are set on API for MongoDB accounts by default, and can't be changed or removed. One example is the EnableMongo capability. This article will demonstrate how to enable and disable a capability. +Capabilities are features that can be added to or removed from your API for MongoDB account. Many of these features affect account behavior, so it's important to be fully aware of the effect a capability has before you enable or disable it. Several capabilities are set on API for MongoDB accounts by default and can't be changed or removed. One example is the `EnableMongo` capability. This article demonstrates how to enable and disable a capability. ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://aka.ms/trycosmosdb).-- Azure Cosmos DB for MongoDB account. [Create an API for MongoDB account](quickstart-nodejs.md#create-an-azure-cosmos-db-account).-- [Azure Command-Line Interface (CLI)](/cli/azure/) or Azure portal access. Changing capabilities via ARM is not supported.+- An Azure Cosmos DB for MongoDB account. [Create an API for MongoDB account](quickstart-nodejs.md#create-an-azure-cosmos-db-account). +- [Azure CLI](/cli/azure/) or Azure portal access. Changing capabilities via Azure Resource Manager isn't supported. ## Available capabilities | Capability | Description | Removable | | -- | -- | |-| `DisableRateLimitingResponses` | Allows Mongo API to retry rate-limiting requests on the server-side until max-request-timeout | Yes | -| `EnableMongoRoleBasedAccessControl` | Enable support for creating Users/Roles for native MongoDB role-based access control | No | -| `EnableMongoRetryableWrites` | Enables support for retryable writes on the account | Yes | -| `EnableMongo16MBDocumentSupport` | Enables support for inserting documents upto 16 MB in size | No | -| `EnableUniqueCompoundNestedDocs` | Enables support for compound and unique indexes on nested fields, as long as the nested field is not an array. | No | -| `EnableTtlOnCustomPath` | Provides the ability to set a custom TTL on any one field in a collection | No | -| `EnablePartialUniqueIndex` | Enables support for unique partial index which allows you more flexibility to specify exactly which fields in documents you'd like to index. | No | +| `DisableRateLimitingResponses` | Allows Mongo API to retry rate-limited requests on the server side until the value that's set for `max-request-timeout`. | Yes | +| `EnableMongoRoleBasedAccessControl` | Enable support for creating users and roles for native MongoDB role-based access control. | No | +| `EnableMongoRetryableWrites` | Enables support for retryable writes on the account. | Yes | +| `EnableMongo16MBDocumentSupport` | Enables support for inserting documents up to 16 MB in size. 
| No | +| `EnableUniqueCompoundNestedDocs` | Enables support for compound and unique indexes on nested fields if the nested field isn't an array. | No | +| `EnableTtlOnCustomPath` | Provides the ability to set a custom Time to Live (TTL) on any one field in a collection. | No | +| `EnablePartialUniqueIndex` | Enables support for a unique partial index, so you have more flexibility to specify exactly which fields in documents you'd like to index. | No | ## Enable a capability -1. Retrieve your existing account capabilities by using [**az cosmosdb show**](/cli/azure/cosmosdb#az-cosmosdb-show): +1. Retrieve your existing account capabilities by using [az cosmosdb show](/cli/azure/cosmosdb#az-cosmosdb-show): ```azurecli-interactive az cosmosdb show \ Capabilities are features that can be added or removed to your API for MongoDB a --name <azure_cosmos_db_account_name> ``` - You should see a capability section similar to this output: + You should see a capability section that's similar to this example output: ```json "capabilities": [ Capabilities are features that can be added or removed to your API for MongoDB a ] ``` - Review the default capability. In this example, we have just `EnableMongo`. + Review the default capability. In this example, the only capability that's set is `EnableMongo`. -1. Set the new capability on your database account. The list of capabilities should include the list of previously enabled capabilities, since only the explicitly named capabilities will be set on your account. For example, if you want to add the capability `DisableRateLimitingResponses`, you would use the [**az cosmosdb update**](/cli/azure/cosmosdb#az-cosmosdb-update) command with the `--capabilities` parameter: +1. Set the new capability on your database account. The list of capabilities should include the list of previously enabled capabilities that you want to keep. ++ Only explicitly named capabilities are set on your account. For example, if you want to add the `DisableRateLimitingResponses` capability to the preceding example, use the [az cosmosdb update](/cli/azure/cosmosdb#az-cosmosdb-update) command with the `--capabilities` parameter, and list all capabilities that you want to have in your account: ```azurecli-interactive az cosmosdb update \ Capabilities are features that can be added or removed to your API for MongoDB a ``` > [!IMPORTANT]- > The list of capabilities must always specify all capabilities you wish to enable, inclusively. This includes capabilities already enabled for the account. In this example, the `EnableMongo` capability was already enabled, so both the `EnableMongo` and `DisableRateLimitingResponses` capabilities must be specified. + > The list of capabilities must always specify *all* capabilities that you want to enable, inclusively. This includes capabilities that are already enabled for the account that you want to keep. In this example, the `EnableMongo` capability was already enabled, so you must specify both the `EnableMongo` capability and the `DisableRateLimitingResponses` capability. > [!TIP]- > If you're using PowerShell and receive an error using the command above, try using a PowerShell array instead to list the capabilities: + > If you're using PowerShell and an error message appears when you use the preceding command, instead try using a PowerShell array to list the capabilities: > > ```azurecli > az cosmosdb update \ Capabilities are features that can be added or removed to your API for MongoDB a ## Disable a capability -1. 
Retrieve your existing account capabilities by using **az cosmosdb show**: +1. Retrieve your existing account capabilities by using `az cosmosdb show`: ```azurecli-interactive az cosmosdb show \ Capabilities are features that can be added or removed to your API for MongoDB a --name <azure_cosmos_db_account_name> ``` - You should see a capability section similar to this output: + You should see a capability section that's similar to this example output: ```json "capabilities": [ Capabilities are features that can be added or removed to your API for MongoDB a ] ``` - Observe each of these capabilities. In this example, we have `EnableMongo` and `DisableRateLimitingResponses`. + Check for all capabilities that are currently set. In this example, two capabilities are set: `EnableMongo` and `DisableRateLimitingResponses`. ++1. Remove one of the capabilities from your database account. The list of capabilities should include the list of previously enabled capabilities that you want to keep. -1. Remove the capability from your database account. The list of capabilities should include the list of previously enabled capabilities you want to keep, since only the explicitly named capabilities will be set on your account. For example, if you want to remove the capability `DisableRateLimitingResponses`, you would use the **az cosmosdb update** command: + Only explicitly named capabilities are set on your account. For example, if you want to remove the `DisableRateLimitingResponses` capability, you would use the `az cosmosdb update` command, and list the capability that you want to keep: ```azurecli-interactive az cosmosdb update \ Capabilities are features that can be added or removed to your API for MongoDB a ``` > [!TIP]- > If you're using PowerShell and receive an error using the command above, try using a PowerShell array instead to list the capabilities: + > If you're using PowerShell and an error message appears when you use this command, instead try using a PowerShell array to list the capabilities: > > ```azurecli > az cosmosdb update \ Capabilities are features that can be added or removed to your API for MongoDB a - Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB for MongoDB. - Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB for MongoDB. - Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.- - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md). - - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md). + - If all you know is the number of vCores and servers in your existing database cluster, learn how to [estimate request units by using vCores or vCPUs](../convert-vcore-to-request-unit.md). + - If you know typical request rates for your current database workload, learn how to [estimate request units by using the Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md). |
cosmos-db | Vector Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/vector-search.md | Title: Vector search on embeddings -description: Use vector indexing and search to integrate AI-based applications in Azure Cosmos DB for MongoDB vCore +description: Use vector indexing and search to integrate AI-based applications in Azure Cosmos DB for MongoDB vCore. -# Using vector search on embeddings in Azure Cosmos DB for MongoDB vCore +# Use vector search on embeddings in Azure Cosmos DB for MongoDB vCore [!INCLUDE[MongoDB vCore](../../includes/appliesto-mongodb-vcore.md)] -Use Vector Search in Azure Cosmos DB for MongoDB vCore to seamlessly integrate your AI-based applications, including apps built using [Azure OpenAI embeddings](../../../cognitive-services/openai/tutorials/embeddings.md), with your data stored in Azure Cosmos DB. Vector search enables you to efficiently store, index, and query high dimensional vector data stored directly in Azure Cosmos DB for MongoDB vCore, eliminating the need to transfer your data to more expensive alternatives for vector search capabilities. +Use vector search in Azure Cosmos DB for MongoDB vCore to seamlessly integrate your AI-based applications, including apps that you built by using [Azure OpenAI embeddings](../../../cognitive-services/openai/tutorials/embeddings.md), with your data that's stored in Azure Cosmos DB. Vector search enables you to efficiently store, index, and query high-dimensional vector data that's stored directly in Azure Cosmos DB for MongoDB vCore. It eliminates the need to transfer your data to more expensive alternatives for vector search capabilities. -## What is Vector search? +## What is vector search? -Vector search is a method that helps you find similar items based on their data characteristics rather than exact matches on a property field. This technique is useful in applications such as searching for similar text, finding related images, making recommendations, or even detecting anomalies. It works by taking the vector representations (lists of numbers) of your data that you have created using an ML model, or an embeddings API. Examples of embeddings APIs could be [Azure OpenAI Embeddings](/azure/cognitive-services/openai/how-to/embeddings) or [Hugging Face on Azure](https://azure.microsoft.com/solutions/hugging-face-on-azure/). It then measures the distance between the data vectors and your query vector. The data vectors that are closest to your query vector are the ones that are found to be most similar semantically. +Vector search is a method that helps you find similar items based on their data characteristics rather than by exact matches on a property field. This technique is useful in applications such as searching for similar text, finding related images, making recommendations, or even detecting anomalies. It works by taking the vector representations (lists of numbers) of your data that you created by using a machine learning model or an embeddings API. Examples of embeddings APIs are [Azure OpenAI Embeddings](/azure/cognitive-services/openai/how-to/embeddings) or [Hugging Face on Azure](https://azure.microsoft.com/solutions/hugging-face-on-azure/). It then measures the distance between the data vectors and your query vector. The data vectors that are closest to your query vector are the ones that are found to be most similar semantically. 
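To build intuition for "measuring the distance between vectors," here's a small illustrative sketch of cosine similarity, one of the metrics mentioned later in this article. It's for intuition only; with vector search, the service computes the similarity for you, and the sample vectors below are made up.

```javascript
// Illustrative sketch only: cosine similarity, the kind of metric a vector
// search evaluates between a query vector and each stored vector.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Vectors that point in similar directions score close to 1.
console.log(cosineSimilarity([0.52, 0.28, 0.12], [0.5, 0.3, 0.1]));  // ~0.998
console.log(cosineSimilarity([0.52, 0.28, 0.12], [-0.5, 0.2, 0.8])); // much lower
```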
-By integrating vector search capabilities natively, you can now unlock the full potential of your data in applications built on top of the OpenAI API. You can also create custom-built solutions that use vector embeddings. +By integrating vector search capabilities natively, you can unlock the full potential of your data in applications that are built on top of the OpenAI API. You can also create custom-built solutions that use vector embeddings. -## Create a vector index +## Use the createIndexes template to create a vector index -To create a vector index, use the following createIndex Spec template: +To create a vector index, use the following `createIndexes` template: ```json { To create a vector index, use the following createIndex Spec template: | Field | Type | Description | | | | |-| `index_name` | `string` | Unique name of the index. | -| `path_to_property` | `string` | Path to the property containing the vector. This path can be a top-level property or a `dot-notation` path to the property. If a `dot-notation` path is used, then all the nonleaf elements can't be arrays. | -| `kind` | `string` | Type of vector index to create. Currently, `vector-ivf` is the only supported index option. | -| `numLists` | `integer` | This integer is the number of clusters the IVF index uses to group the vector data. It's recommended that numLists are set to `rowCount()/1000` for up to a million rows and `sqrt(rowCount)` for more than a million rows. | -| `similarity` | `string` | Similarity metric to use with the IVF index. Possible options are `COS` (cosine distance), `L2` (Euclidean distance) or `IP` (inner product) | -| `dimensions` | `integer` | Number of dimensions for vector similarity. The maximum number of supported dimensions is `2000`. | +| `index_name` | string | Unique name of the index. | +| `path_to_property` | string | Path to the property that contains the vector. This path can be a top-level property or a dot notation path to the property. If a dot notation path is used, then all the nonleaf elements can't be arrays. | +| `kind` | string | Type of vector index to create. Currently, `vector-ivf` is the only supported index option. | +| `numLists` | integer | This integer is the number of clusters that the inverted file (IVF) index uses to group the vector data. We recommend that `numLists` be set to `rowCount()/1000` for up to 1 million rows and to `sqrt(rowCount)` for more than 1 million rows. | +| `similarity` | string | Similarity metric to use with the IVF index. Possible options are `COS` (cosine distance), `L2` (Euclidean distance), and `IP` (inner product). | +| `dimensions` | integer | Number of dimensions for vector similarity. The maximum number of supported dimensions is `2000`. | -In the following examples, we walk through examples on how to index vectors, add documents with vector properties, perform a vector search, and retrieve the index configuration. +## Examples -### Create a vectorIndex +The following examples show you how to index vectors, add documents that have vector properties, perform a vector search, and retrieve the index configuration. ++### Create a vector index ```javascript use test; db.runCommand({ }); ``` -This command creates a `vector-ivf` index against the "vectorContent" property in the documents stored in the specified collection, `exampleCollection`. The `cosmosSearchOptions` property specifies the parameters for the IVF vector index. If your document has the vector stored in a nested property, you can set this property using a dot-notation path. 
For example, `text.vectorContent` if `vectorContent` is a subproperty of `text`. +This command creates a `vector-ivf` index against the `vectorContent` property in the documents that are stored in the specified collection, `exampleCollection`. The `cosmosSearchOptions` property specifies the parameters for the IVF vector index. If your document has the vector stored in a nested property, you can set this property by using a dot notation path. For example, you might use `text.vectorContent` if `vectorContent` is a subproperty of `text`. -## Adding vectors to your database +### Add vectors to your database -To add vectors to your database's collection, you first need to create the embeddings using your own model, [Azure OpenAI Embeddings](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/cognitive-services/openai/tutorials/embeddings.md), or another API (such as [Hugging Face on Azure](https://azure.microsoft.com/solutions/hugging-face-on-azure/)). In this example, new documents are added with sample embeddings: +To add vectors to your database's collection, you first need to create the embeddings by using your own model, [Azure OpenAI Embeddings](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/cognitive-services/openai/tutorials/embeddings.md), or another API (such as [Hugging Face on Azure](https://azure.microsoft.com/solutions/hugging-face-on-azure/)). In this example, new documents are added through sample embeddings: ```javascript db.exampleCollection.insertMany([ db.exampleCollection.insertMany([ ]); ``` -### Performing a vector search +### Perform a vector search -To perform a vector search, use the `$search` aggregation pipeline stage in a MongoDB query. To use the `cosmosSearch` index, we have introduced a new `cosmosSearch` operator. +To perform a vector search, use the `$search` aggregation pipeline stage in a MongoDB query. To use the `cosmosSearch` index, use the new `cosmosSearch` operator. ```json { To perform a vector search, use the `$search` aggregation pipeline stage in a Mo } ``` -### Query a vectorIndex using $search +### Query a vector index by using $search -Continuing with the above example, create another vector, `queryVector`. Vector search measures the distance between `queryVector` and the vectors in the `vectorContent` path of your documents. You can set the number of results the search returns by setting the parameter `k`, which is set to `2` here. +Continuing with the last example, create another vector, `queryVector`. Vector search measures the distance between `queryVector` and the vectors in the `vectorContent` path of your documents. You can set the number of results that the search returns by setting the parameter `k`, which is set to `2` here. ```javascript const queryVector = [0.52, 0.28, 0.12]; db.exampleCollection.aggregate([ ]); ``` -In this example, a vector search is performed using `queryVector` as an input via the Mongo shell. The search result is a list of the two most similar items to the query vector, sorted by their similarity scores. +In this example, a vector search is performed by using `queryVector` as an input via the Mongo shell. The search result is a list of two items that are most similar to the query vector, sorted by their similarity scores. ```javascript [ In this example, a vector search is performed using `queryVector` as an input vi ### Get vector index definitions -To retrieve your vector index definition from the collection, use the `listIndexes` command. 
+To retrieve your vector index definition from the collection, use the `listIndexes` command: ``` javascript db.exampleCollection.getIndexes(); ``` -In this example, the vectorIndex is returned along with all the cosmosSearch parameters used to create the index +In this example, `vectorIndex` is returned with all the `cosmosSearch` parameters that were used to create the index: ```javascript [ In this example, the vectorIndex is returned along with all the cosmosSearch par ## Features and limitations -* Supported distance metrics: L2 (Euclidean), inner product, and cosine. -* Supported indexing methods: IVFFLAT. -* Indexing vectors up to 2,000 dimensions in size. -* Indexing applies to only one vector per document. +- Supported distance metrics: L2 (Euclidean), inner product, and cosine. +- Supported indexing methods: IVFFLAT. +- Indexing vectors up to 2,000 dimensions in size. +- Indexing applies to only one vector per document. ## Next steps -This guide demonstrated how to create a vector index, add documents with vector data, perform a similarity search, and retrieve the index definition. By using vector search, you can efficiently store, index, and query high-dimensional vector data directly in Azure Cosmos DB for MongoDB vCore. Vector search enables you to unlock the full potential of your data with vector embeddings, and empowers you to build more accurate, efficient, and powerful applications. +This guide demonstrates how to create a vector index, add documents that have vector data, perform a similarity search, and retrieve the index definition. By using vector search, you can efficiently store, index, and query high-dimensional vector data directly in Azure Cosmos DB for MongoDB vCore. Vector search enables you to unlock the full potential of your data via vector embeddings, and it empowers you to build more accurate, efficient, and powerful applications. > [!div class="nextstepaction"] > [Introduction to Azure Cosmos DB for MongoDB vCore](introduction.md) |
cosmos-db | Change Feed Design Patterns | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/change-feed-design-patterns.md | Title: Change feed design patterns in Azure Cosmos DB -description: Overview of common change feed design patterns +description: Get an overview of common change feed design patterns. Last updated 05/10/2023 # Change feed design patterns in Azure Cosmos DB+ [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)] -The Azure Cosmos DB change feed enables efficient processing of large datasets with a high volume of writes. Change feed also offers an alternative to querying an entire dataset to identify what has changed. This document focuses on common change feed design patterns, design tradeoffs, and change feed limitations. +The Azure Cosmos DB change feed enables efficient processing of large datasets that have a high volume of writes. Change feed also offers an alternative to querying an entire dataset to identify what has changed. This article focuses on common change feed design patterns, design tradeoffs, and change feed limitations. > > [!VIDEO https://aka.ms/docs.change-feed-azure-functions] Azure Cosmos DB is well-suited for IoT, gaming, retail, and operational logging applications. A common design pattern in these applications is to use changes to the data to trigger other actions. Examples of these actions include: -* Triggering a notification or a call to an API when an item is inserted, updated or deleted. -* Real-time stream processing for IoT or real-time analytics processing on operational data. -* Data movement such as synchronizing with a cache, a search engine, a data warehouse, or cold storage. +- Triggering a notification or a call to an API when an item is inserted, updated, or deleted. +- Real-time stream processing for IoT or real-time analytics processing on operational data. +- Data movement such as synchronizing with a cache, a search engine, a data warehouse, or cold storage. The change feed in Azure Cosmos DB enables you to build efficient and scalable solutions for each of these patterns, as shown in the following image: ## Event computing and notifications -The Azure Cosmos DB change feed can simplify scenarios that need to trigger a notification or send a call to an API based on a certain event. You can use the [Change Feed Processor](change-feed-processor.md) to automatically poll your container for changes and call an external API each time there's a write, update or delete. +The Azure Cosmos DB change feed can simplify scenarios that need to trigger a notification or send a call to an API based on a certain event. You can use the [change feed processor](change-feed-processor.md) to automatically poll your container for changes and then call an external API each time there's a write, update, or delete. -You can also selectively trigger a notification or send a call to an API based on specific criteria. For example, if you're reading from the change feed using [Azure Functions](change-feed-functions.md), you can put logic into the function to only send a notification if a condition is met. While the Azure Function code would execute for each change, the notification would only be sent if the condition is met. +You can also selectively trigger a notification or send a call to an API based on specific criteria. For example, if you're reading from the change feed by using [Azure Functions](change-feed-functions.md), you can put logic into the function to send a notification only if a condition is met. 
Although the Azure Function code would execute for each change, the notification would be sent only if the condition is met. ## Real-time stream processing -The Azure Cosmos DB change feed can be used for real-time stream processing for IoT or real-time analytics processing on operational data. -For example, you might receive and store event data from devices, sensors, infrastructure and applications, and process these events in real time, using [Spark](../../hdinsight/spark/apache-spark-overview.md). The following image shows how you can implement a lambda architecture using the Azure Cosmos DB change feed: +The Azure Cosmos DB change feed can be used for real-time stream processing for IoT or real-time analytics processing on operational data. For example, you might receive and store event data from devices, sensors, infrastructure, and applications, and then process these events in real time by using [Spark](../../hdinsight/spark/apache-spark-overview.md). The following image shows how you can implement a lambda architecture by using the Azure Cosmos DB change feed: In many cases, stream processing implementations first receive a high volume of incoming data into a temporary message queue such as Azure Event Hubs or Apache Kafka. The change feed is a great alternative due to Azure Cosmos DB's ability to support a sustained high rate of data ingestion with guaranteed low read and write latency. The advantages of the Azure Cosmos DB change feed over a message queue include: ### Data persistence -Data written to Azure Cosmos DB shows up in the change feed and is retained until deleted when reading with [latest version mode](change-feed-modes.md#latest-version-change-feed-mode). Message queues typically have a maximum retention period. For example, [Azure Event Hubs](https://azure.microsoft.com/services/event-hubs/) offers a maximum data retention of 90 days. +Data that's written to Azure Cosmos DB shows up in the change feed. The data is retained in the change feed until it's deleted if you read by using [latest version mode](change-feed-modes.md#latest-version-change-feed-mode). Message queues typically have a maximum retention period. For example, [Azure Event Hubs](https://azure.microsoft.com/services/event-hubs/) offers a maximum data retention of 90 days. -### Querying ability +### Query ability -In addition to reading from an Azure Cosmos DB container's change feed, you can also run SQL queries on the data stored in Azure Cosmos DB. The change feed isn't a duplication of data already in the container but rather just a different mechanism of reading the data. Therefore, if you read data from the change feed, it's always consistent with queries of the same Azure Cosmos DB container. +In addition to reading from an Azure Cosmos DB container's change feed, you can run SQL queries on the data that's stored in Azure Cosmos DB. The change feed isn't a duplication of data that's already in the container, but rather, it's just a different mechanism of reading the data. Therefore, if you read data from the change feed, the data is always consistent with queries of the same Azure Cosmos DB container. ### High availability -Azure Cosmos DB offers up to 99.999% read and write availability. Unlike many message queues, Azure Cosmos DB data can be easily globally distributed and configured with an [RTO (Recovery Time Objective)](../consistency-levels.md#rto) of zero. +Azure Cosmos DB offers up to 99.999% read and write availability. 
Unlike many message queues, Azure Cosmos DB data can be easily globally distributed and configured with a [recovery time objective (RTO)](../consistency-levels.md#rto) of zero. -After processing items in the change feed, you can build a materialized view and persist aggregated values back in Azure Cosmos DB. If you're using Azure Cosmos DB to build a game, you can, for example, use change feed to implement real-time leaderboards based on scores from completed games. +After you process items in the change feed, you can build a materialized view and persist aggregated values back in Azure Cosmos DB. If you're using Azure Cosmos DB to build a game, for example, you can use change feed to implement real-time leaderboards based on scores from completed games. ## Data movement You can also read from the change feed for real-time data movement. For example, the change feed helps you perform the following tasks efficiently: -* Update a cache, search index, or data warehouse with data stored in Azure Cosmos DB. +- Update a cache, search index, or data warehouse with data that's stored in Azure Cosmos DB. -* Perform zero down-time migrations to another Azure Cosmos DB account or another Azure Cosmos DB container with a different logical partition key. +- Perform zero-downtime migrations to another Azure Cosmos DB account or to another Azure Cosmos DB container that has a different logical partition key. -* Implement an application-level data tiering and archival. For example, you can store "hot data" in Azure Cosmos DB and age out "cold data" to other storage systems such as [Azure Blob Storage](../../storage/common/storage-introduction.md). +- Implement application-level data tiering and archival. For example, you can store "hot data" in Azure Cosmos DB and age out "cold data" to other storage systems such as [Azure Blob Storage](../../storage/common/storage-introduction.md). When you have to [denormalize data across partitions and containers](model-partition-example.md#v2-introducing-denormalization-to-optimize-read-queries-), you can read from your container's change feed as a source for this data replication. Real-time data replication with the change feed can only guarantee eventual consistency. You can [monitor how far the Change Feed Processor lags behind](how-to-use-change-feed-estimator.md) in processing changes in your Azure Cosmos DB container. +), you can read from your container's change feed as a source for this data replication. Real-time data replication with the change feed can guarantee only eventual consistency. You can [monitor how far the change feed processor lags behind](how-to-use-change-feed-estimator.md) in processing changes in your Azure Cosmos DB container. ## Event sourcing -The [event sourcing pattern](/azure/architecture/patterns/event-sourcing) involves using an append-only store to record the full series of actions on that data. Azure Cosmos DB's change feed is a great choice as a central data store in event sourcing architectures where all data ingestion is modeled as writes (no updates or deletes). In this case, each write to Azure Cosmos DB is an "event" meaning there's a full record of past events in the change feed. Typical uses of the events published by the central event store are for maintaining materialized views or for integration with external systems. 
Because there's no time limit for retention in the [change feed latest version mode](change-feed-modes.md#latest-version-change-feed-mode), you can replay all past events by reading from the beginning of your Azure Cosmos DB container's change feed. You can even have [multiple change feed consumers subscribe to the same container's change feed](how-to-create-multiple-cosmos-db-triggers.md#optimizing-containers-for-multiple-triggers). +The [event sourcing pattern](/azure/architecture/patterns/event-sourcing) involves using an append-only store to record the full series of actions on that data. The Azure Cosmos DB change feed is a great choice as a central data store in event sourcing architectures in which all data ingestion is modeled as writes (no updates or deletes). In this case, each write to Azure Cosmos DB is an "event," so there's a full record of past events in the change feed. Typical uses of the events published by the central event store are to maintain materialized views or to integrate with external systems. Because there's no time limit for retention in the [change feed latest version mode](change-feed-modes.md#latest-version-change-feed-mode), you can replay all past events by reading from the beginning of your Azure Cosmos DB container's change feed. You can even have [multiple change feed consumers subscribe to the same container's change feed](how-to-create-multiple-cosmos-db-triggers.md#optimizing-containers-for-multiple-triggers). -Azure Cosmos DB is a great central append-only persistent data store in the event sourcing pattern because of its strengths in horizontal scalability and high availability. In addition, the change Feed Processor library offers an ["at least once"](change-feed-processor.md#error-handling) guarantee, ensuring that you don't miss processing any events. +Azure Cosmos DB is a great central append-only persistent data store in the event sourcing pattern because of its strengths in horizontal scalability and high availability. In addition, the change feed processor offers an ["at least once"](change-feed-processor.md#error-handling) guarantee, ensuring that you don't miss processing any events. ## Current limitations -The change feed has multiple modes that each have important limitations that you should understand. There are several areas to consider when designing an application that uses the change feed in either [latest version mode](change-feed-modes.md#latest-version-change-feed-mode) or [all versions and deletes mode](change-feed-modes.md#all-versions-and-deletes-change-feed-mode-preview). +The change feed has multiple modes that each have important limitations that you should understand. There are several areas to consider when you design an application that uses the change feed in either [latest version mode](change-feed-modes.md#latest-version-change-feed-mode) or [all versions and deletes mode](change-feed-modes.md#all-versions-and-deletes-change-feed-mode-preview). ### Intermediate updates #### [Latest version mode](#tab/latest-version) -In latest version mode, only the most recent change for a given item is included in the change feed. When processing changes, you read the latest available item version. If there are multiple updates to the same item in a short period of time, it's possible to miss processing intermediate updates. If you would like to replay past individual updates to an item, you can model these updates as a series of writes instead or use all versions and deletes mode. 
+In latest version mode, only the most recent change for a specific item is included in the change feed. When processing changes, you read the latest available item version. If there are multiple updates to the same item in a short period of time, it's possible to miss processing intermediate updates. If you would like to replay past individual updates to an item, you can model these updates as a series of writes instead or use all versions and deletes mode. #### [All versions and deletes mode (preview)](#tab/all-versions-and-deletes) -All versions and deletes mode provides a full operation log of every item version from all operations. This means no intermediate updates are missed as long as they occurred within the continuous backup retention period configured on the account. +All versions and deletes mode provides a full operation log of every item version from all operations. No intermediate updates are missed if they occurred within the continuous backup retention period that's configured for the account. All versions and deletes mode provides a full operation log of every item versio #### [Latest version mode](#tab/latest-version) -The change feed latest version mode doesn't capture deletes. If you delete an item from your container, it's also removed from the change feed. The most common method of handling deletes is adding a soft marker on the items that are being deleted. You can add a property called "deleted" and set it to "true" at the time of deletion. This document update shows up in the change feed. You can set a TTL on this item so that it can be automatically deleted later. +The change feed latest version mode doesn't capture deletes. If you delete an item from your container, it's also removed from the change feed. The most common method of handling deletes is to add a soft marker on the items that are being deleted. You can add a property called `deleted` and set it to `true` at the time of deletion. This document update shows up in the change feed. You can set a Time to Live (TTL) on this item so that it can be automatically deleted later. #### [All versions and deletes mode (preview)](#tab/all-versions-and-deletes) -Deletes are captured in all versions and deletes mode without needing to set a soft delete marker. You also get metadata indicating if the delete was from a TTL expiration or not. +Deletes are captured in all versions and deletes mode without needing to set a soft delete marker. You also get metadata that indicates whether the delete was from a TTL expiration. Deletes are captured in all versions and deletes mode without needing to set a s #### [Latest version mode](#tab/latest-version) -The change feed in latest version mode has an unlimited retention. As long as an item exists in your container it's available in the change feed. +The change feed in latest version mode has an unlimited retention. As long as an item exists in your container, it's available in the change feed. #### [All versions and deletes mode (preview)](#tab/all-versions-and-deletes) -All versions and deletes mode only allows you to read changes that occurred within the continuous backup retention period configured on the account. This means that if your account is configured with a seven day retention period, you can't read changes from eight days ago. If your application needs to track all updates from the beginning of the container, latest version mode might be a better fit. 
+All versions and deletes mode allows you to read only changes that occurred within the continuous backup retention period that's configured for the account. If your account is configured with a seven-day retention period, you can't read changes from eight days ago. If your application needs to track all updates from the beginning of the container, latest version mode might be a better fit. ### Guaranteed order -All change feed modes have a guaranteed order within a partition key value but not across partition key values. You should select a partition key that gives you a meaningful order guarantee. +All change feed modes have a guaranteed order within a partition key value, but not across partition key values. You should select a partition key that gives you a guarantee of meaningful order. -For example, consider a retail application using the event sourcing design pattern. In this application, different user actions are each "events", which are modeled as writes to Azure Cosmos DB. Imagine if some example events occurred in the following sequence: +For example, consider a retail application that uses the event sourcing design pattern. In this application, different user actions are each "events," which are modeled as writes to Azure Cosmos DB. Imagine if some example events occurred in the following sequence: -1. Customer adds Item A to their shopping cart -2. Customer adds Item B to their shopping cart -3. Customer removes Item A from their shopping cart -4. Customer checks out and shopping cart contents are shipped +1. Customer adds Item A to their shopping cart. +1. Customer adds Item B to their shopping cart. +1. Customer removes Item A from their shopping cart. +1. Customer checks out and shopping cart contents are shipped. -A materialized view of current shopping cart contents is maintained for each customer. This application must ensure that these events are processed in the order in which they occur. If for example, the cart checkout were to be processed before Item A's removal, it's likely that the customer would have had Item A shipped, as opposed to the desired Item B. In order to guarantee that these four events are processed in order of their occurrence, they should fall within the same partition key value. If you select **username** (each customer has a unique username) as the partition key, you can guarantee that these events show up in the change feed in the same order in which they're written to Azure Cosmos DB. +A materialized view of current shopping cart contents is maintained for each customer. This application must ensure that these events are processed in the order in which they occur. For example, if the cart checkout were to be processed before Item A's removal, it's likely that Item A would have shipped to the customer, and not the item the customer wanted instead, Item B. To guarantee that these four events are processed in order of their occurrence, they should fall within the same partition key value. If you select `username` (each customer has a unique username) as the partition key, you can guarantee that these events show up in the change feed in the same order in which they're written to Azure Cosmos DB. ## Examples Here are some real-world change feed code examples for latest version mode that ## Next steps -* [Change feed overview](../change-feed.md) -* [Change feed modes](change-feed-modes.md) -* [Options to read change feed](read-change-feed.md) -* Trying to do capacity planning for a migration to Azure Cosmos DB? 
You can use information about your existing database cluster for capacity planning. - * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md) - * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) +- Review the [change feed overview](../change-feed.md). +- Learn more about [change feed modes](change-feed-modes.md). +- Learn your [options to read your change feed](read-change-feed.md). +- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. + - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units by using vCores or vCPUs](../convert-vcore-to-request-unit.md). + - If you know typical request rates for your current database workload, read about [estimating request units by using the Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md). |
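To make the soft-delete handling described above for latest version mode concrete, the following is a minimal C# sketch. It assumes the v3 .NET SDK (`Microsoft.Azure.Cosmos`), a container that has TTL enabled, `username` as the partition key (matching the shopping cart example), and a hypothetical `CartEvent` item class; only the `deleted` and `ttl` property conventions come from the guidance above.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Hypothetical item shape. The "ttl" property maps to the per-item time-to-live (in seconds).
public class CartEvent
{
    public string id { get; set; }
    public string username { get; set; }   // partition key, as in the shopping cart example
    public bool deleted { get; set; }      // soft-delete marker that change feed consumers filter on
    public int? ttl { get; set; }          // requires TTL to be enabled on the container
}

public static class SoftDeleteSample
{
    // Marks an item as deleted so the update appears in latest version mode,
    // then relies on TTL to purge the item from the container later.
    public static async Task SoftDeleteAsync(Container container, string id, string username)
    {
        ItemResponse<CartEvent> response =
            await container.ReadItemAsync<CartEvent>(id, new PartitionKey(username));

        CartEvent item = response.Resource;
        item.deleted = true;   // consumers treat this update as a delete
        item.ttl = 3600;       // the item is purged from the container after one hour

        await container.ReplaceItemAsync(item, id, new PartitionKey(username));
    }
}
```

The TTL value is a tradeoff: it must be long enough for every consumer to observe the marker, but every extra hour keeps the tombstone in storage.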
cosmos-db | Change Feed Modes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/change-feed-modes.md | Title: Change feed modes in Azure Cosmos DB -description: Overview of Azure Cosmos DB change feed modes +description: Get an overview of Azure Cosmos DB change feed modes. -There are two change feed modes in Azure Cosmos DB. Each mode offers the same core functionality with differences including the operations captured in the feed, metadata available for each change, and retention period of changes. You can consume the change feed in different modes across multiple applications for the same Azure Cosmos DB container to fit the requirements of each workload. +Azure Cosmos DB offers two change feed modes. Each mode offers the same core functionality. Differences include the operations that are captured in the feed, the metadata that's available for each change, and the retention period of changes. You can consume the change feed in different modes across multiple applications for the same Azure Cosmos DB container to fit the requirements of each workload. -> [!Note] -> Do you have any feedback about change feed modes? We want to hear it! Feel free to share feedback directly with the Azure Cosmos DB engineering team: cosmoschangefeed@microsoft.com +> [!NOTE] +> Do you have any feedback about change feed modes? We want to hear it! Feel free to share feedback directly with the Azure Cosmos DB engineering team: [cosmoschangefeed@microsoft.com](mailto:cosmoschangefeed@microsoft.com). ## Latest version change feed mode -Latest version mode is a persistent record of changes to items from creates and updates. You get the latest version of each item in the container. For example, if an item is created and then updated before you read the change feed, only the updated version appears in the change feed. Deletes aren't captured as changes, and when an item is deleted it's no longer be available in the feed. Latest version change feed mode is enabled by default and is compatible with all Azure Cosmos DB accounts except API for Table and API for PostgreSQL. This mode was previously the default way to consume the change feed. +Latest version mode is a persistent record of changes made to items from creates and updates. You get the latest version of each item in the container. For example, if an item is created and then updated before you read the change feed, only the updated version appears in the change feed. Deletes aren't captured as changes, and when an item is deleted, it's no longer available in the feed. Latest version change feed mode is enabled by default and is compatible with all Azure Cosmos DB accounts except the API for Table and the API for PostgreSQL. This mode was previously the default way to consume the change feed. ## All versions and deletes change feed mode (preview) -All versions and deletes mode (preview) is a persistent record of all changes to items from create, update, and delete operations. You get a record of each change to items in the order that it occurred, including intermediate changes to an item between change feed reads. For example, if an item is created and then updated before you read the change feed, both the create and the update versions of the item appear in the change feed. To read from the change feed in all versions and deletes mode, you must have [continuous backups](../continuous-backup-restore-introduction.md) configured for your Azure Cosmos DB account. 
Turning on continuous backups creates the all versions and deletes change feed. You can only read changes that occurred within the continuous backup period when using this change feed mode. This mode is only compatible with Azure Cosmos DB for NoSQL accounts. Learn more about how to [sign up for the preview](#getting-started). +All versions and deletes mode (preview) is a persistent record of all changes to items from create, update, and delete operations. You get a record of each change to items in the order that it occurred, including intermediate changes to an item between change feed reads. For example, if an item is created and then updated before you read the change feed, both the create and the update versions of the item appear in the change feed. To read from the change feed in all versions and deletes mode, you must have [continuous backups](../continuous-backup-restore-introduction.md) configured for your Azure Cosmos DB account. Turning on continuous backups creates the all versions and deletes change feed. You can only read changes that occurred within the continuous backup period when using this change feed mode. This mode is only compatible with Azure Cosmos DB for NoSQL accounts. Learn more about how to [sign up for the preview](#get-started). ## Change feed use cases ### [Latest version mode](#tab/latest-version) - Latest version mode provides an easy way to process both real time and historic changes to items in a container with the ability to go back to changes from the beginning of the container. +Latest version mode provides an easy way to process both real-time and historic changes to items in a container with the ability to go back to changes from the beginning of the container. -The following are scenarios well suited to this mode: +The following are scenarios well-suited to this mode: * Migrations of an entire container to a secondary location. * Ability to reprocess changes from the beginning of the container. -* Real time processing of changes to items in a container resulting from create and update operations. +* Real-time processing of changes to items in a container resulting from create and update operations. * Workloads that don't need to capture deletes or intermediate changes between reads. ### [All versions and deletes mode (preview)](#tab/all-versions-and-deletes) -The all versions and deletes change feed mode enables new scenarios for change feed, and simplifies others. You can read every change that occurred to items even in cases where multiple changes occurred between change feed reads, identify the operation type of changes being processed, and get changes resulting from deletes. +The all versions and deletes change feed mode enables new scenarios for change feed, and simplifies others. You can read every change that occurred to items (even in cases in which multiple changes occurred between change feed reads), identify the operation type of changes being processed, and get changes that result from deletes. -A few common scenarios this mode enables and enhances are: +A few common scenarios this mode enables and enhances are: -* Real-time transfer of data between two locations without having to implement a soft delete. +* Real-time transfer of data between two locations without implementing a soft delete. * Triggering logic based on incremental changes if multiple change operations for a given item are expected between change feed polls. -* Triggering alerts on delete operations, for example, auditing scenarios. 
+* Triggering alerts on delete operations, like in auditing scenarios. * The ability to process item creates, updates, and deletes differently based on operation type. In addition to the [common features across all change feed modes](../change-feed ### [Latest version mode](#tab/latest-version) -* The change feed includes insert and update operations made to items within the container. +* The change feed includes insert and update operations that are made to items in the container. -* This mode of change feed doesn't log deletes. You can capture deletes by setting a "soft-delete" flag within your items instead of deleting them directly. For example, you can add an attribute in the item called "deleted" with the value "true" and then set a TTL on the item. The change feed captures it as an update and the item is automatically deleted when the TTL expires. Alternatively, you can set a finite expiration period for your items with the [TTL capability](time-to-live.md). With this solution, you have to process the changes within a shorter time interval than the TTL expiration period. +* This mode of change feed doesn't log deletes. You can capture deletes by setting a "soft-delete" flag within your items instead of deleting them directly. For example, you can add an attribute in the item called `deleted` with the value `true`, and then set a Time to Live (TTL) on the item. The change feed captures it as an update and the item is automatically deleted when the TTL expires. Alternatively, you can set a finite expiration period for your items by using the [TTL capability](time-to-live.md). With this solution, you have to process the changes within a shorter time interval than the TTL expiration period. -* Only the most recent change for a given item is included in the change feed. Intermediate changes may not be available. +* Only the most recent change for a specific item is included in the change feed. Intermediate changes might not be available. * When an item is deleted, it's no longer available in the change feed. -* Changes can be synchronized from any point-in-time, and there's no fixed data retention period for which changes are available. +* Changes can be synchronized from any point in time, and there's no fixed data retention period for which changes are available. -* You can't filter the change feed for a specific type of operation. One possible alternative, is to add a "soft marker" on the item for updates and filter based on that when processing items in the change feed. +* You can't filter the change feed for a specific type of operation. One possible alternative is to add a "soft marker" on the item for updates and filter based on the marker when you process items in the change feed. -* The starting point to read change feed can be from the beginning of the container, from a point in time, from "now", or from a specific checkpoint. The precision of the start time is ~5 secs. +* The starting point to read the change feed can be from the beginning of the container, from a point in time, from "now," or from a specific checkpoint. The precision of the start time is approximately five seconds. ### [All versions and deletes mode (preview)](#tab/all-versions-and-deletes) -* The change feed includes insert, update and delete operations made to items within the container. Deletes from TTL expirations are also captured. +* The change feed includes insert, update, and delete operations made to items within the container. Deletes from TTL expirations are also captured. 
-* Metadata is provided to determine the change type, including if a delete was due to a TTL expiration or not. +* Metadata is provided to determine the change type, including whether a delete was due to a TTL expiration. -* Change feed items come in the order of their modification time. Deletes from TTL expirations aren't guaranteed to appear in the feed immediately after the item expires. They appear when the item is purged from the container. +* Change feed items come in the order of their modification time. Deletes from TTL expirations aren't guaranteed to appear in the feed immediately after the item expires. They appear when the item is purged from the container. -* All changes that occurred within the retention window set for continuous backups on the account are able to be read. Attempting to read changes that occurred outside of the retention window results in an error. For example, if your container was created eight days ago and your continuous backup period retention period is seven days, then you can only read changes from the last seven days. +* All changes that occurred within the retention window that's set for continuous backups on the account can be read. Attempting to read changes that occurred outside of the retention window results in an error. For example, if your container was created eight days ago and your continuous backup retention period is seven days, then you can only read changes from the last seven days. -* The change feed starting point can be from "now" or from a specific checkpoint within your retention period. You can't read changes from the beginning of the container or from a specific point in time using this mode. +* The change feed starting point can be from "now" or from a specific checkpoint within your retention period. You can't read changes from the beginning of the container or from a specific point in time by using this mode. -## Working with the change feed +## Work with the change feed Each mode is compatible with different methods to read the change feed for each language. Each mode is compatible with different methods to read the change feed for each You can use the following ways to consume changes from change feed in latest version mode: -| **Method to read change feed** | **.NET** | **Java** | **Python** | **Node/JS** | -| | | | | | | | +| **Method to read change feed** | **.NET** | **Java** | **Python** | **Node.js** | +| | | | | | | [Change feed pull model](change-feed-pull-model.md) | Yes | Yes | Yes | Yes | | [Change feed processor](change-feed-processor.md) | Yes | Yes | No | No | | [Azure Functions trigger](change-feed-functions.md) | Yes | Yes | Yes | Yes | -### Parsing the response object +### Parse the response object ++In latest version mode, the default response object is an array of items that have changed. Each item contains the standard metadata for any Azure Cosmos DB item, including `_etag` and `_ts`, with the addition of a new property, `_lsn`. -In latest version mode, the default response object is an array of items that have changed. Each item contains the standard metadata for any Azure Cosmos DB item including `_etag` and `_ts` with the addition of a new property `_lsn`. +The `_etag` format is internal and you shouldn't take a dependency on it because it can change at any time. `_ts` is a modification or a creation time stamp. You can use `_ts` for chronological comparison. `_lsn` is a batch ID that's added for the change feed only, and it represents the transaction ID. Many items can have the same `_lsn`. 
-The `_etag` format is internal and you shouldn't take dependency on it because it can change anytime. `_ts` is a modification or a creation timestamp. You can use `_ts` for chronological comparison. `_lsn` is a batch ID that is added for change feed only that represents the transaction ID. Many items may have same `_lsn`. ETag on FeedResponse is different from the `_etag` you see on the item. `_etag` is an internal identifier and it's used for concurrency control. The `_etag` property represents the version of the item, whereas the ETag property is used for sequencing the feed. +`ETag` on `FeedResponse` is different from the `_etag` you see on the item. `_etag` is an internal identifier, and it's used for concurrency control. The `_etag` property represents the version of the item, whereas the `ETag` property is used to sequence the feed. ### [All versions and deletes mode (preview)](#tab/all-versions-and-deletes) During the preview, the following methods to read the change feed are available for each client SDK: -| **Method to read change feed** | **.NET** | **Java** | **Python** | **Node/JS** | -| | | | | | | | +| **Method to read change feed** | **.NET** | **Java** | **Python** | **Node.js** | +| | | | | | | [Change feed pull model](change-feed-pull-model.md) | [>= 3.32.0-preview](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.32.0-preview) | [>= 4.42.0](https://mvnrepository.com/artifact/com.azure/azure-cosmos/4.37.0) | No | No | | [Change feed processor](change-feed-processor.md) | No | [>= 4.42.0](https://mvnrepository.com/artifact/com.azure/azure-cosmos/4.42.0) | No | No | | Azure Functions trigger | No | No | No | No | > [!NOTE]-> Regardless of the [connection mode](sdk-connection-modes.md#available-connectivity-modes) configured in your application, all requests made with all versions and deletes change feed will use Gateway mode. +> Regardless of the [connection mode](sdk-connection-modes.md#available-connectivity-modes) that's configured in your application, all requests made with all versions and deletes change feed will use Gateway mode. -### Getting started +### Get started To get started using all versions and deletes change feed mode, enroll in the preview via the [Preview Features page](../../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page. Search for the **AllVersionsAndDeletesChangeFeed** feature and select **Register**. :::image type="content" source="media/change-feed-modes/enroll-in-preview.png" alt-text="Screenshot of All versions and deletes change feed mode feature in Preview Features page in Subscriptions overview in Azure portal."::: -Before submitting your request, ensure that you have at least one Azure Cosmos DB account in the subscription. This account may be an existing account or a new one you've created to try out the preview feature. If you have no accounts in the subscription when the Azure Cosmos DB team receives your request, it will be declined, as there are no accounts to apply the feature to. +Before you submit your request, ensure that you have at least one Azure Cosmos DB account in the subscription. This account can be an existing account or a new account that you created to try out the preview feature. If you have no accounts in the subscription when the Azure Cosmos DB team receives your request, the request is declined because there are no accounts to apply the feature to. 
-The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview. To use the preview, you must have [continuous backups](../continuous-backup-restore-introduction.md) configured for your Azure Cosmos DB account. Continuous backups can be enabled either before or after being admitted to the preview, but they must be enabled before attempting to read from the change feed in all versions and deletes mode. +The Azure Cosmos DB team reviews your request and contacts you via email to confirm which Azure Cosmos DB accounts in the subscription you want to enroll in the preview. To use the preview, you must have [continuous backups](../continuous-backup-restore-introduction.md) configured for your Azure Cosmos DB account. Continuous backups can be enabled either before or after being admitted to the preview, but continuous backups must be enabled before you attempt to read from the change feed in all versions and deletes mode. -### Parsing the response object +### Parse the response object -The response object is an array of items representing each change with the following shape: +The response object is an array of items that represent each change. The array looks like the following example: ```json [ {ΓÇ» "current": { - <This is the current version of the item that changed. All of the properties of your item will appear here. This will be empty for delete operations.> + <The current version of the item that changed. All the properties of your item will appear here. This will be empty for delete operations.> },ΓÇ» "previous" : { - <This is the previous version of the item that changed. If the change was a delete operation, the item that was deleted will appear here. This will be empty for create and replace operations.> + <The previous version of the item that changed. If the change was a delete operation, the item that was deleted will appear here. This will be empty for create and replace operations.> },ΓÇ» "metadata": {- "lsn": <This is a number representing the batch id. Many items may have the same lsn.>, + "lsn": <A number that represents the batch ID. Many items can have the same lsn.>, "changeType": <The type of change, either 'create', 'replace', or 'delete'.>, - "previousImageLSN" : <This is a number representing the batch id of the change prior to this one.>, - "timeToLiveExpired" : <For delete changes, it will be 'true' if it was deleted due to a TTL expiration or 'false' if not.>, - "crts": <This is a number representing the Conflict Resolved Timestamp. It has the same format as _ts.> + "previousImageLSN" : <A number that represents the batch ID of the change prior to this one.>, + "timeToLiveExpired" : <For delete changes, it will be 'true' if it was deleted due to a TTL expiration and 'false' if it wasn't.>, + "crts": <A number that represents the Conflict Resolved Timestamp. It has the same format as _ts.> } } ] The response object is an array of items representing each change with the follo * Supported for Azure Cosmos DB for NoSQL accounts. Other Azure Cosmos DB account types aren't supported. -* Continuous backups are required to use this change feed mode, the [limitations](../continuous-backup-restore-introduction.md#current-limitations) of which can be found in the documentation. +* Continuous backups are required to use this change feed mode. 
The [limitations](../continuous-backup-restore-introduction.md#current-limitations) of using continuous backup can be found in the documentation. * Reading changes on a container that existed before continuous backups were enabled on the account isn't supported. -* The ability to start reading the change feed from the beginning or select a start time based on a past timestamp isn't currently supported. You may either start from "now" or from a previous [lease](change-feed-processor.md#components-of-the-change-feed-processor) or [continuation token](change-feed-pull-model.md#saving-continuation-tokens). +* The ability to start reading the change feed from the beginning or to select a start time based on a past time stamp isn't currently supported. You can either start from "now" or from a previous [lease](change-feed-processor.md#components-of-the-change-feed-processor) or [continuation token](change-feed-pull-model.md#save-continuation-tokens). * Receiving the previous version of items that have been updated isn't currently available. -* Accounts using [Private Endpoints](../how-to-configure-private-endpoints.md) aren't supported. +* Accounts that use [private endpoints](../how-to-configure-private-endpoints.md) aren't supported. * Accounts that have enabled [merging partitions](../merge.md) aren't supported. The response object is an array of items representing each change with the follo ## Next steps -You can now proceed to learn more about change feed in the following articles: +Learn more about change feed in the following articles: * [Change feed overview](../change-feed.md) * [Change feed design patterns](./change-feed-design-patterns.md) |
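When you consume the all versions and deletes response shape shown in the preceding change feed modes notes, it can help to deserialize each entry into small wrapper types. The following C# classes are a hypothetical sketch: the class and .NET property names are illustrative, and only the JSON field names (`current`, `previous`, `metadata`, `lsn`, `changeType`, `previousImageLSN`, `timeToLiveExpired`, and `crts`) come from the documented shape.

```csharp
using Newtonsoft.Json;

// TItem is your own item type. For delete operations, Current is empty and Previous holds the deleted item.
public class ChangeFeedEntry<TItem>
{
    [JsonProperty("current")]
    public TItem Current { get; set; }

    [JsonProperty("previous")]
    public TItem Previous { get; set; }

    [JsonProperty("metadata")]
    public ChangeFeedEntryMetadata Metadata { get; set; }
}

public class ChangeFeedEntryMetadata
{
    [JsonProperty("lsn")]
    public long Lsn { get; set; }                        // batch ID; many entries can share the same value

    [JsonProperty("changeType")]
    public string ChangeType { get; set; }               // "create", "replace", or "delete"

    [JsonProperty("previousImageLSN")]
    public long PreviousImageLsn { get; set; }           // batch ID of the change prior to this one

    [JsonProperty("timeToLiveExpired")]
    public bool TimeToLiveExpired { get; set; }          // true when a delete came from a TTL expiration

    [JsonProperty("crts")]
    public long ConflictResolvedTimestamp { get; set; }  // same format as _ts
}
```

With types like these, you can, for example, route entries whose `ChangeType` is `delete` to an audit store while applying creates and replaces to a materialized view.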
cosmos-db | Change Feed Processor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/change-feed-processor.md | Title: Change feed processor in Azure Cosmos DB -description: Learn how to use the Azure Cosmos DB Change Feed Processor to read the change feed, the components of the change feed processor +description: Learn how to use the Azure Cosmos DB change feed processor to read the change feed, and learn about the components of the change feed processor. -The main benefit of change feed processor is its fault-tolerant behavior that assures an "at-least-once" delivery of all the events in the change feed. +The main benefit of using the change feed processor is its fault-tolerant design, which assures an "at-least-once" delivery of all the events in the change feed. ## Components of the change feed processor -There are four main components of implementing the change feed processor: +The change feed processor has four main components: ++* **The monitored container**: The monitored container has the data from which the change feed is generated. Any inserts and updates to the monitored container are reflected in the change feed of the container. -* **The monitored container:** The monitored container has the data from which the change feed is generated. Any inserts and updates to the monitored container are reflected in the change feed of the container. +* **The lease container**: The lease container acts as state storage and coordinates processing the change feed across multiple workers. The lease container can be stored in the same account as the monitored container or in a separate account. -* **The lease container:** The lease container acts as a state storage and coordinates processing the change feed across multiple workers. The lease container can be stored in the same account as the monitored container or in a separate account. +* **The compute instance**: A compute instance hosts the change feed processor to listen for changes. Depending on the platform, it might be represented by a virtual machine (VM), a Kubernetes pod, an Azure App Service instance, or an actual physical machine. The compute instance has a unique identifier that's called the *instance name* throughout this article. -* **The compute instance**: A compute instance hosts the change feed processor to listen for changes. Depending on the platform, it could be represented by a VM, a kubernetes pod, an Azure App Service instance, or an actual physical machine. It has a unique identifier referenced as the *instance name* throughout this article. +* **The delegate**: The delegate is the code that defines what you, the developer, want to do with each batch of changes that the change feed processor reads. -* **The delegate:** The delegate is the code that defines what you, the developer, want to do with each batch of changes that the change feed processor reads. +To further understand how these four elements of the change feed processor work together, let's look at an example in the following diagram. The monitored container stores items and uses 'City' as the partition key. The partition key values are distributed in ranges (each range represents a [physical partition](../partitioning-overview.md#physical-partitions)) that contain items. -To further understand how these four elements of change feed processor work together, let's look at an example in the following diagram. The monitored container stores items and uses 'City' as the partition key. 
The partition key values are distributed in ranges (each range representing a [physical partition](../partitioning-overview.md#physical-partitions)) that contain items. -There are two compute instances and the change feed processor is assigning different ranges to each instance to maximize compute distribution, each instance has a unique and different name. -Each range is being read in parallel and its progress is maintained separately from other ranges in the lease container through a *lease* document. The combination of the leases represents the current state of the change feed processor. +The diagram shows two compute instances, and the change feed processor assigns different ranges to each instance to maximize compute distribution. Each instance has a different, unique name. ++Each range is read in parallel. A range's progress is maintained separately from other ranges in the lease container through a *lease* document. The combination of the leases represents the current state of the change feed processor. :::image type="content" source="./media/change-feed-processor/changefeedprocessor.png" alt-text="Change feed processor example" border="false"::: -## Implementing the change feed processor +## Implement the change feed processor ### [.NET](#tab/dotnet) -The change feed processor in .NET is currently only available for [latest version mode](change-feed-modes.md#latest-version-change-feed-mode). The point of entry is always the monitored container, from a `Container` instance you call `GetChangeFeedProcessorBuilder`: +The change feed processor in .NET is currently available only for [latest version mode](change-feed-modes.md#latest-version-change-feed-mode). The point of entry is always the monitored container. In a `Container` instance, you call `GetChangeFeedProcessorBuilder`: [!code-csharp[Main](~/samples-cosmosdb-dotnet-change-feed-processor/src/Program.cs?name=DefineProcessor)] -Where the first parameter is a distinct name that describes the goal of this processor and the second name is the delegate implementation that handles changes. +The first parameter is a distinct name that describes the goal of this processor. The second parameter is the delegate implementation that handles changes. -An example of a delegate is: +Here's an example of a delegate: [!code-csharp[Main](~/samples-cosmosdb-dotnet-change-feed-processor/src/Program.cs?name=Delegate)] -Afterwards, you define the compute instance name or unique identifier with `WithInstanceName`, which should be unique and different in each compute instance you're deploying, and finally, which is the container to maintain the lease state with `WithLeaseContainer`. +Afterward, you define the compute instance name or unique identifier by using `WithInstanceName`. The compute instance name should be unique and different for each compute instance you're deploying. You set the container to maintain the lease state by using `WithLeaseContainer`. Calling `Build` gives you the processor instance that you can start by calling `StartAsync`. Calling `Build` gives you the processor instance that you can start by calling ` The normal life cycle of a host instance is: 1. Read the change feed.-1. If there are no changes, sleep for a predefined amount of time (customizable with `WithPollInterval` in the Builder) and go to #1. -1. If there are changes, send them to the **delegate**. -1. When the delegate finishes processing the changes **successfully**, update the lease store with the latest processed point in time and go to #1. +1. 
If there are no changes, sleep for a predefined amount of time (customizable by using `WithPollInterval` in the Builder) and go to #1. +1. If there are changes, send them to the *delegate*. +1. When the delegate finishes processing the changes *successfully*, update the lease store with the latest processed point in time and go to #1. ## Error handling -The change feed processor is resilient to user code errors. If your delegate implementation has an unhandled exception (step #4), the thread processing that particular batch of changes stops, and a new thread is eventually created. The new thread checks the latest point in time the lease store has saved for that range of partition key values, and restarts from there, effectively sending the same batch of changes to the delegate. This behavior continues until your delegate processes the changes correctly and it's the reason the change feed processor has an "at least once" guarantee. Consuming the change feed in an Eventual consistency level can also result in duplicate events in-between subsequent change feed read operations. For example, the last event of one read operation could appear as the first event of the next operation. +The change feed processor is resilient to user code errors. If your delegate implementation has an unhandled exception (step #4), the thread that is processing that particular batch of changes stops, and a new thread is eventually created. The new thread checks the latest point in time that the lease store has saved for that range of partition key values. The new thread restarts from there, effectively sending the same batch of changes to the delegate. This behavior continues until your delegate processes the changes correctly, and it's the reason the change feed processor has an "at least once" guarantee. Consuming the change feed in an Eventual consistency level can also result in duplicate events in between subsequent change feed read operations. For example, the last event of one read operation could appear as the first event of the next operation. > [!NOTE]-> There is only one scenario where a batch of changes will not be retried. If the failure happens on the first ever delegate execution, the lease store has no previous saved state to be used on the retry. On those cases, the retry would use the [initial starting configuration](#starting-time), which might or might not include the last batch. +> In only one scenario, a batch of changes is not retried. If the failure happens on the first ever delegate execution, the lease store has no previous saved state to be used on the retry. In those cases, the retry uses the [initial starting configuration](#starting-time), which might or might not include the last batch. -To prevent your change feed processor from getting "stuck" continuously retrying the same batch of changes, you should add logic in your delegate code to write documents, upon exception, to an errored-message queue. This design ensures that you can keep track of unprocessed changes while still being able to continue to process future changes. The errored-message queue might be another Azure Cosmos DB container. The exact data store doesn't matter, simply that the unprocessed changes are persisted. +To prevent your change feed processor from getting "stuck" continuously retrying the same batch of changes, you should add logic in your delegate code to write documents, upon exception, to an errored-message queue. 
This design ensures that you can keep track of unprocessed changes while still being able to continue to process future changes. The errored-message queue might be another Azure Cosmos DB container. The exact data store doesn't matter. You simply want the unprocessed changes to be persisted. -In addition, you can use the [change feed estimator](how-to-use-change-feed-estimator.md) to monitor the progress of your change feed processor instances as they read the change feed or use the [life cycle notifications](#life-cycle-notifications) to detect underlying failures. +You also can use the [change feed estimator](how-to-use-change-feed-estimator.md) to monitor the progress of your change feed processor instances as they read the change feed, or you can use [life cycle notifications](#life-cycle-notifications) to detect underlying failures. -## Life-cycle notifications +## Life cycle notifications -The change feed processor lets you hook to relevant events in its [life cycle](#processing-life-cycle), you can choose to be notified to one or all of them. The recommendation is to at least register the error notification: +You can connect the change feed processor to any relevant event in its [life cycle](#processing-life-cycle). You can choose to be notified of one or all of these events. The recommendation is to at least register the error notification: * Register a handler for `WithLeaseAcquireNotification` to be notified when the current host acquires a lease to start processing it. * Register a handler for `WithLeaseReleaseNotification` to be notified when the current host releases a lease and stops processing it.-* Register a handler for `WithErrorNotification` to be notified when the current host encounters an exception during processing, being able to distinguish if the source is the user delegate (unhandled exception) or an error the processor is encountering trying to access the monitored container (for example, networking issues). +* Register a handler for `WithErrorNotification` to be notified when the current host encounters an exception during processing. You need to be able to distinguish whether the source is the user delegate (an unhandled exception) or an error that the processor encounters when it tries to access the monitored container (for example, networking issues). [!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=StartWithNotifications)] ## Deployment unit -A single change feed processor deployment unit consists of one or more compute instances with the same `processorName` and lease container configuration but different instance name each. You can have many deployment units where each one has a different business flow for the changes and each deployment unit consisting of one or more instances. +A single change feed processor deployment unit consists of one or more compute instances that have the same value for `processorName` and the same lease container configuration, but different instance names. You can have many deployment units in which each unit has a different business flow for the changes and each deployment unit consists of one or more instances. -For example, you might have one deployment unit that triggers an external API anytime there's a change in your container. Another deployment unit might move data, in real time, each time there's a change. When a change happens in your monitored container, all your deployment units get notified. 
+For example, you might have one deployment unit that triggers an external API each time there's a change in your container. Another deployment unit might move data in real time each time there's a change. When a change happens in your monitored container, all your deployment units are notified. ## Dynamic scaling -As mentioned before, within a deployment unit you can have one or more compute instances. To take advantage of the compute distribution within the deployment unit, the only key requirements are: +As mentioned earlier, within a deployment unit, you can have one or more compute instances. To take advantage of the compute distribution within the deployment unit, the only key requirements are that: * All instances should have the same lease container configuration.-* All instances should have the same `processorName`. +* All instances should have the same value for `processorName`. * Each instance needs to have a different instance name (`WithInstanceName`). -If these three conditions apply, then the change feed processor distributes all the leases in the lease container across all running instances of that deployment unit and parallelizes compute using an equal distribution algorithm. A lease is owned by one instance at a given time, so the number of instances shouldn't be greater than the number of leases. +If these three conditions apply, then the change feed processor distributes all the leases that are in the lease container across all running instances of that deployment unit, and it parallelizes compute by using an equal-distribution algorithm. A lease is owned by one instance at any time, so the number of instances shouldn't be greater than the number of leases. -The number of instances can grow and shrink, and the change feed processor will dynamically adjust the load by redistributing accordingly. +The number of instances can grow and shrink. The change feed processor dynamically adjusts the load by redistributing accordingly. -Moreover, the change feed processor can dynamically adjust to containers scale due to throughput or storage increases. When your container grows, the change feed processor transparently handles these scenarios by dynamically increasing the leases and distributing the new leases among existing instances. +Moreover, the change feed processor can dynamically adjust to a container's scale if the container's throughput or storage increases. When your container grows, the change feed processor transparently handles the scenario by dynamically increasing the leases and distributing the new leases among existing instances. ## Starting time -By default, when a change feed processor starts the first time, it initializes the leases container, and start its [processing life cycle](#processing-life-cycle). Any changes that happened in the monitored container before the change feed processor is initialized for the first time aren't detected. +By default, when a change feed processor starts for the first time, it initializes the lease container and starts its [processing life cycle](#processing-life-cycle). Any changes that happened in the monitored container before the change feed processor is initialized for the first time aren't detected. 
### Reading from a previous date and time -It's possible to initialize the change feed processor to read changes starting at a **specific date and time**, by passing an instance of a `DateTime` to the `WithStartTime` builder extension: +It's possible to initialize the change feed processor to read changes starting at a *specific date and time* by passing an instance of `DateTime` to the `WithStartTime` builder extension: [!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=TimeInitialization)] -The change feed processor will be initialized for that specific date and time and start reading the changes that happened after. +The change feed processor is initialized for that specific date and time, and it starts to read the changes that happened afterward. ### Reading from the beginning -In other scenarios like data migrations or analyzing the entire history of a container, we need to read the change feed from **the beginning of that container's lifetime**. To do that, we can use `WithStartTime` on the builder extension, but passing `DateTime.MinValue.ToUniversalTime()`, which would generate the UTC representation of the minimum `DateTime` value, like so: +In other scenarios, like in data migrations or if you're analyzing the entire history of a container, you need to read the change feed from *the beginning of that container's lifetime*. You can use `WithStartTime` on the builder extension, but pass `DateTime.MinValue.ToUniversalTime()`, which generates the UTC representation of the minimum `DateTime` value like in this example: [!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=StartFromBeginningInitialization)] -The change feed processor will be initialized and start reading changes from the beginning of the lifetime of the container. +The change feed processor is initialized, and it starts reading changes from the beginning of the lifetime of the container. > [!NOTE]-> These customization options only work to setup the starting point in time of the change feed processor. Once the leases container is initialized for the first time, changing them has no effect. +> These customization options work only to set up the starting point in time of the change feed processor. After the lease container is initialized for the first time, changing these options has no effect. 
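To tie the preceding .NET sections together, here's a minimal sketch of one deployment unit: the delegate writes any batch it can't process to a dead-letter container (the errored-message queue pattern from the error handling section), and two compute instances share the same processor name and lease container while differing only in `WithInstanceName`. The container names, the `MyItem` type, the processor name, and the instance names are all hypothetical; this isn't the linked sample code.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public class MyItem
{
    public string id { get; set; }
}

public static class DeploymentUnitSketch
{
    // Builds one compute instance of the deployment unit. Every instance uses the same
    // processor name and lease container, but a different instance name.
    public static ChangeFeedProcessor BuildProcessor(
        Container monitoredContainer,
        Container leaseContainer,
        Container deadLetterContainer,
        string instanceName)
    {
        return monitoredContainer
            .GetChangeFeedProcessorBuilder<MyItem>("myProcessor", async (changes, cancellationToken) =>
            {
                try
                {
                    foreach (MyItem item in changes)
                    {
                        // Business logic for each changed item goes here.
                    }
                }
                catch (Exception)
                {
                    // Errored-message queue pattern: persist the unprocessed batch so the
                    // processor can move on while you track failures elsewhere.
                    foreach (MyItem item in changes)
                    {
                        await deadLetterContainer.UpsertItemAsync(item, cancellationToken: cancellationToken);
                    }
                }
            })
            .WithInstanceName(instanceName)              // must be different on every compute instance
            .WithLeaseContainer(leaseContainer)          // shared state for the whole deployment unit
            .WithPollInterval(TimeSpan.FromSeconds(5))   // sleep time when there are no new changes
            .Build();
    }

    // Two instances of the same deployment unit; the leases are distributed between them.
    public static async Task StartTwoInstancesAsync(
        Container monitoredContainer, Container leaseContainer, Container deadLetterContainer)
    {
        ChangeFeedProcessor instanceA = BuildProcessor(monitoredContainer, leaseContainer, deadLetterContainer, "instance-a");
        ChangeFeedProcessor instanceB = BuildProcessor(monitoredContainer, leaseContainer, deadLetterContainer, "instance-b");

        await instanceA.StartAsync();
        await instanceB.StartAsync();
    }
}
```

In production, each instance normally runs on its own compute (a VM, a pod, and so on); starting two in one process here only illustrates the naming requirements.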
### [Java](#tab/java) An example of a delegate implementation when reading the change feed in [latest version mode](change-feed-modes.md#latest-version-change-feed-mode) is: - [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java?name=Delegate)] +[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java?name=Delegate)] ->[!NOTE] -> In the above we pass a variable `options` of type `ChangeFeedProcessorOptions`, which can be used to set various values including `setStartFromBeginning`: +>[!NOTE] +> In this example, you pass a variable `options` of type `ChangeFeedProcessorOptions`, which can be used to set various values, including `setStartFromBeginning`: +> > [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java?name=ChangeFeedProcessorOptions)] -The delegate implementation for reading the change feed in [all versions and deletes mode](change-feed-modes.md#all-versions-and-deletes-change-feed-mode-preview) is similar, but instead of calling `.handleChanges()` you call `.handleAllVersionsAndDeletesChanges()`. All versions and deletes mode is in preview and is available in Java SDK version >= `4.42.0`. An example is: +The delegate implementation for reading the change feed in [all versions and deletes mode](change-feed-modes.md#all-versions-and-deletes-change-feed-mode-preview) is similar, but instead of calling `.handleChanges()`, call `.handleAllVersionsAndDeletesChanges()`. All versions and deletes mode is in preview and is available in Java SDK version >= `4.42.0`. - [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessorForAllVersionsAndDeletesMode.java?name=Delegate)] - -In either change feed mode, you can assign this to a `changeFeedProcessorInstance`, passing parameters of compute instance name (`hostName`), the monitored container (here called `feedContainer`) and the `leaseContainer`. We then start the change feed processor: +Here's an example: - [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java?name=StartChangeFeedProcessor)] +[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessorForAllVersionsAndDeletesMode.java?name=Delegate)] ++In either change feed mode, you can assign it to `changeFeedProcessorInstance` and pass the parameters of compute instance name (`hostName`), the monitored container (here called `feedContainer`), and the `leaseContainer`. Then start the change feed processor: ++[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java?name=StartChangeFeedProcessor)] >[!NOTE]-> The above code snippets are taken from samples in GitHub. You can find the sample for [latest version mode here](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java) or [all versions and deletes mode here](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessorForAllVersionsAndDeletesMode.java). +> The preceding code snippets are taken from samples in GitHub. 
You can get the sample for [latest version mode](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java) or [all versions and deletes mode](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessorForAllVersionsAndDeletesMode.java). ## Processing life cycle The normal life cycle of a host instance is: 1. Read the change feed.-1. If there are no changes, sleep for a predefined amount of time (customizable with `options.setFeedPollDelay` in the Builder) and go to #1. -1. If there are changes, send them to the **delegate**. -1. When the delegate finishes processing the changes **successfully**, update the lease store with the latest processed point in time and go to #1. +1. If there are no changes, sleep for a predefined amount of time (customizable with `options.setFeedPollDelay` in the builder) and go to #1. +1. If there are changes, send them to the *delegate*. +1. When the delegate finishes processing the changes *successfully*, update the lease store by using the latest processed point in time and go to #1. ## Error handling -The change feed processor is resilient to user code errors. If your delegate implementation has an unhandled exception (step #4), the thread processing that particular batch of changes is stopped, and a new thread is created. The new thread checks the latest point in time the lease store has saved for that range of partition key values, and restart from there, effectively sending the same batch of changes to the delegate. This behavior continues until your delegate processes the changes correctly and it's the reason the change feed processor has an "at least once" guarantee. Consuming the change feed in an Eventual consistency level can also result in duplicate events in-between subsequent change feed read operations. For example, the last event of one read operation could appear as the first event of the next operation. +The change feed processor is resilient to user code errors. If your delegate implementation has an unhandled exception (step #4), the thread that's processing that particular batch of changes is stopped, and a new thread is created. The new thread checks the latest point in time that the lease store has saved for that range of partition key values, and it restarts from there, effectively sending the same batch of changes to the delegate. This behavior continues until your delegate processes the changes correctly. It's the reason why the change feed processor has an "at least once" guarantee. Consuming the change feed in an Eventual consistency level can also result in duplicate events in between subsequent change feed read operations. For example, the last event of one read operation might appear as the first event of the next operation. > [!NOTE]-> There is only one scenario where a batch of changes will not be retried. If the failure happens on the first ever delegate execution, the lease store has no previous saved state to be used on the retry. On those cases, the retry would use the [initial starting configuration](#starting-time), which might or might not include the last batch. +> In only one scenario is a batch of changes not retried. If the failure happens on the first ever delegate execution, the lease store has no previous saved state to be used on the retry. 
In those cases, the retry uses the [initial starting configuration](#starting-time), which might or might not include the last batch. -To prevent your change feed processor from getting "stuck" continuously retrying the same batch of changes, you should add logic in your delegate code to write documents, upon exception, to an errored-message. This design ensures that you can keep track of unprocessed changes while still being able to continue to process future changes. The errored-message might be another Azure Cosmos DB container. The exact data store doesn't matter, simply that the unprocessed changes are persisted. +To prevent your change feed processor from getting "stuck" continuously retrying the same batch of changes, you should add logic in your delegate code to write documents, upon exception, to an errored-message queue. This design ensures that you can keep track of unprocessed changes while still being able to continue to process future changes. The errored-message queue might be another Azure Cosmos DB container. The exact data store doesn't matter. You simply want the unprocessed changes to be persisted. -In addition, you can use the [change feed estimator](how-to-use-change-feed-estimator.md) to monitor the progress of your change feed processor instances as they read the change feed. +You also can use the [change feed estimator](how-to-use-change-feed-estimator.md) to monitor the progress of your change feed processor instances as they read the change feed. ## Deployment unit -A single change feed processor deployment unit consists of one or more compute instances with the same lease container configuration, the same `leasePrefix`, but different `hostName` name each. You can have many deployment units where each one has a different business flow for the changes and each deployment unit consisting of one or more instances. +A single change feed processor deployment unit consists of one or more compute instances that have the same lease container configuration and the same `leasePrefix`, but different `hostName` values. You can have many deployment units in which each one has a different business flow for the changes, and each deployment unit consists of one or more instances. -For example, you might have one deployment unit that triggers an external API anytime there's a change in your container. Another deployment unit might move data, in real time, each time there's a change. When a change happens in your monitored container, all your deployment units get notified. +For example, you might have one deployment unit that triggers an external API each time there's a change in your container. Another deployment unit might move data in real time each time there's a change. When a change happens in your monitored container, all your deployment units are notified. ## Dynamic scaling -As mentioned before, within a deployment unit you can have one or more compute instances. To take advantage of the compute distribution within the deployment unit, the only key requirements are: +As mentioned earlier, within a deployment unit, you can have one or more compute instances. To take advantage of the compute distribution within the deployment unit, the only key requirements are that: * All instances should have the same lease container configuration. * All instances should have the same value set in `options.setLeasePrefix` (or none set at all). * Each instance needs to have a different `hostName`. 
-If these three conditions apply, then the change feed processor distributes all the leases in the lease container across all running instances of that deployment unit and parallelizes compute using an equal distribution algorithm. A lease is owned by one instance at a given time, so the number of instances shouldn't be greater than the number of leases. +If these three conditions apply, then the change feed processor distributes all the leases in the lease container across all running instances of that deployment unit, and it parallelizes compute by using an equal-distribution algorithm. A lease is owned by one instance at any time, so the number of instances shouldn't be greater than the number of leases. -The number of instances can grow and shrink, and the change feed processor will dynamically adjust the load by redistributing accordingly. Deployment units can share the same lease container, but they should each have a different `leasePrefix`. +The number of instances can grow and shrink. The change feed processor dynamically adjusts the load by redistributing accordingly. Deployment units can share the same lease container, but they should each have a different `leasePrefix` value. -Moreover, the change feed processor can dynamically adjust to containers scale due to throughput or storage increases. When your container grows, the change feed processor transparently handles these scenarios by dynamically increasing the leases and distributing the new leases among existing instances. +Moreover, the change feed processor can dynamically adjust to a container's scale if the container's throughput or storage increases. When your container grows, the change feed processor transparently handles the scenario by dynamically increasing the leases and distributing the new leases among existing instances. ## Starting time -By default, when a change feed processor starts the first time, it initializes the leases container, and start its [processing life cycle](#processing-life-cycle). Any changes that happened in the monitored container before the change feed processor was initialized for the first time won't be detected. +By default, when a change feed processor starts for the first time, it initializes the lease container and starts its [processing life cycle](#processing-life-cycle). Any changes that happened in the monitored container before the change feed processor was initialized for the first time aren't detected. > [!NOTE]-> Modifying the starting time of the change feed processor is not available when you are using [all versions and deletes mode](change-feed-modes.md#all-versions-and-deletes-change-feed-mode-preview). Currently, you must use the default start time. +> Modifying the starting time of the change feed processor isn't available when you use [all versions and deletes mode](change-feed-modes.md#all-versions-and-deletes-change-feed-mode-preview). Currently, you must use the default start time. ### Reading from a previous date and time -It's possible to initialize the change feed processor to read changes starting at a **specific date and time**, by setting `setStartTime` in `options`. The change feed processor will be initialized for that specific date and time and start reading the changes that happened after. +It's possible to initialize the change feed processor to read changes starting at a *specific date and time* by setting `setStartTime` in `options`. 
The change feed processor is initialized for that specific date and time, and it starts reading the changes that happened afterward. ### Reading from the beginning -In our sample, we set `setStartFromBeginning` to `false`, which is the same as the default value. In other scenarios like data migrations or analyzing the entire history of a container, we need to read the change feed from **the beginning of that container's lifetime**. To do that, we can set `setStartFromBeginning` to `true`. The change feed processor will be initialized and start reading changes from the beginning of the lifetime of the container. +In the sample, `setStartFromBeginning` is set to `false`, which is the same as the default value. In other scenarios, like in data migrations or if you're analyzing the entire history of a container, you need to read the change feed from *the beginning of that container's lifetime*. To do that, you can set `setStartFromBeginning` to `true`. The change feed processor is initialized, and it starts reading changes from the beginning of the lifetime of the container. > [!NOTE]-> These customization options only work to setup the starting point in time of the change feed processor. Once the leases container is initialized for the first time, changing them has no effect. +> These customization options work only to set up the starting point in time of the change feed processor. After the lease container is initialized for the first time, changing them has no effect. ## Change feed and provisioned throughput -Change feed read operations on the monitored container consume [request units](../request-units.md). Make sure your monitored container isn't experiencing [throttling](troubleshoot-request-rate-too-large.md), it adds delays in receiving change feed events on your processors. +Change feed read operations on the monitored container consume [request units](../request-units.md). Make sure that your monitored container isn't experiencing [throttling](troubleshoot-request-rate-too-large.md). Throttling adds delays in receiving change feed events on your processors. -Operations on the lease container (updating and maintaining state) consume [request units](../request-units.md). The higher the number of instances using the same lease container, the higher the potential request units consumption is. Make sure your lease container isn't experiencing [throttling](troubleshoot-request-rate-too-large.md), it adds delays in receiving change feed events and can even stop processing completely. +Operations on the lease container (updating and maintaining state) consume [request units](../request-units.md). The higher the number of instances that use the same lease container, the higher the potential consumption of request units. Make sure that your lease container isn't experiencing [throttling](troubleshoot-request-rate-too-large.md). Throttling adds delays in receiving change feed events. Throttling can even completely end processing. -## Sharing the lease container +## Share the lease container -You can share the lease container across multiple [deployment units](#deployment-unit), each deployment unit would be listening to a different monitored container or have a different `processorName`. With this configuration, each deployment unit would maintain an independent state on the lease container. Review the [request unit consumption on the lease container](#change-feed-and-provisioned-throughput) to make sure the provisioned throughput is enough for all the deployment units. 
+You can share a lease container across multiple [deployment units](#deployment-unit). In a shared lease container, each deployment unit listens to a different monitored container or has a different value for `processorName`. In this configuration, each deployment unit maintains an independent state on the lease container. Review the [request unit consumption on a lease container](#change-feed-and-provisioned-throughput) to make sure that the provisioned throughput is enough for all the deployment units.

## Advanced lease configuration

-There are three key configurations that can affect the change feed processor behavior, in all cases, they'll affect the [request unit consumption on the lease container](#change-feed-and-provisioned-throughput). These configurations can be changed during the creation of the change feed processor but should be used carefully:
--* Lease Acquire: By default every 17 seconds. A host will periodically check the state of the lease store and consider acquiring leases as part of the [dynamic scaling](#dynamic-scaling) process. This process is done by executing a Query on the lease container. Reducing this value makes rebalancing and acquiring leases faster but increase [request unit consumption on the lease container](#change-feed-and-provisioned-throughput).
-* Lease Expiration: By default 60 seconds. Defines the maximum amount of time that a lease can exist without any renewal activity before it's acquired by another host. When a host crashes, the leases it owned will be picked up by other hosts after this period of time plus the configured renewal interval. Reducing this value will make recovering after a host crash faster, but the expiration value should never be lower than the renewal interval.
-* Lease Renewal: By default every 13 seconds. A host owning a lease will periodically renew it even if there are no new changes to consume. This process is done by executing a Replace on the lease. Reducing this value lowers the time required to detect leases lost by host crashing but increase [request unit consumption on the lease container](#change-feed-and-provisioned-throughput).
+Three key configurations can affect how the change feed processor works. Each configuration affects the [request unit consumption on the lease container](#change-feed-and-provisioned-throughput). You can set these configurations when you create the change feed processor, but use them carefully:
+* Lease Acquire: By default, every 17 seconds. A host periodically checks the state of the lease store and considers acquiring leases as part of the [dynamic scaling](#dynamic-scaling) process. This process is done by executing a Query on the lease container. Reducing this value makes rebalancing and acquiring leases faster, but it increases [request unit consumption on the lease container](#change-feed-and-provisioned-throughput).
+* Lease Expiration: By default, 60 seconds. Defines the maximum amount of time that a lease can exist without any renewal activity before it's acquired by another host. When a host crashes, the leases it owned are picked up by other hosts after this period of time plus the configured renewal interval. Reducing this value makes recovering after a host crash faster, but the expiration value should never be lower than the renewal interval.
+* Lease Renewal: By default, every 13 seconds. A host that owns a lease periodically renews the lease, even if there are no new changes to consume. This process is done by executing a Replace on the lease.
Reducing this value lowers the time that's required to detect leases lost by a host crashing, but it increases [request unit consumption on the lease container](#change-feed-and-provisioned-throughput). ## Where to host the change feed processor -The change feed processor can be hosted in any platform that supports long running processes or tasks: +The change feed processor can be hosted in any platform that supports long-running processes or tasks. Here are some examples: -* A continuous running [Azure WebJob](/training/modules/run-web-app-background-task-with-webjobs/). -* A process in an [Azure Virtual Machine](/azure/architecture/best-practices/background-jobs#azure-virtual-machines). -* A background job in [Azure Kubernetes Service](/azure/architecture/best-practices/background-jobs#azure-kubernetes-service). -* A serverless function in [Azure Functions](/azure/architecture/best-practices/background-jobs#azure-functions). -* An [ASP.NET hosted service](/aspnet/core/fundamentals/host/hosted-services). +* A continuous running instance of [WebJobs](/training/modules/run-web-app-background-task-with-webjobs/) in Azure App Service +* A process in an instance of [Azure Virtual Machines](/azure/architecture/best-practices/background-jobs#azure-virtual-machines) +* A background job in [Azure Kubernetes Service](/azure/architecture/best-practices/background-jobs#azure-kubernetes-service) +* A serverless function in [Azure Functions](/azure/architecture/best-practices/background-jobs#azure-functions) +* An [ASP.NET hosted service](/aspnet/core/fundamentals/host/hosted-services) -While change feed processor can run in short lived environments because the lease container maintains the state, the startup cycle of these environments adds delay to receiving the notifications (due to the overhead of starting the processor every time the environment is started). +Although the change feed processor can run in short-lived environments because the lease container maintains the state, the startup cycle of these environments adds delays to the time it takes to receive notifications (due to the overhead of starting the processor every time the environment is started). ## Additional resources While change feed processor can run in short lived environments because the leas ## Next steps -You can now proceed to learn more about change feed processor in the following articles: +Learn more about the change feed processor in the following articles: * [Overview of change feed](../change-feed.md) * [Change feed pull model](change-feed-pull-model.md) * [How to migrate from the change feed processor library](how-to-migrate-from-change-feed-library.md)-* [Using the change feed estimator](how-to-use-change-feed-estimator.md) +* [Use the change feed estimator](how-to-use-change-feed-estimator.md) * [Change feed processor start time](#starting-time) |
cosmos-db | Change Feed Pull Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/change-feed-pull-model.md | Title: Change feed pull model
-description: Learn how to use the Azure Cosmos DB change feed pull model to read the change feed and the differences between the pull model and Change Feed Processor
+ Title: Change feed pull model in Azure Cosmos DB
+description: Learn how to use the Azure Cosmos DB change feed pull model to read the change feed. Understand the differences between the change feed pull model and the change feed processor.

-With the change feed pull model, you can consume the Azure Cosmos DB change feed at your own pace. Similar to the [change feed processor](change-feed-processor.md), you can use the change feed pull model to parallelize the processing of changes across multiple change feed consumers.
+You can use the change feed pull model to consume the Azure Cosmos DB change feed at your own pace. Similar to the [change feed processor](change-feed-processor.md), you can use the change feed pull model to parallelize the processing of changes across multiple change feed consumers.

-## Comparing with change feed processor
+## Compare to the change feed processor

-Many scenarios can process the change feed using either the [change feed processor](change-feed-processor.md) or the pull model. The pull model's continuation tokens and the change feed processor's lease container are both "bookmarks" for the last processed item, or batch of items, in the change feed.
+In many scenarios, you can process the change feed by using either the [change feed processor](change-feed-processor.md) or the change feed pull model. The pull model's continuation tokens and the change feed processor's lease container both work as bookmarks for the last processed item or batch of items in the change feed.

-However, you can't convert continuation tokens to a lease (or vice versa).
+However, you can't convert continuation tokens to a lease or vice versa.

> [!NOTE]-> In most cases when you need to read from the change feed, the simplest option is to use the [change feed processor](change-feed-processor.md).
+> In most cases, when you need to read from the change feed, the simplest option is to use the [change feed processor](change-feed-processor.md).

You should consider using the pull model in these scenarios:

-- Read changes from a particular partition key
-- Control the pace at which your client receives changes for processing
-- Perform a one-time read of the existing data in the change feed (for example, to do a data migration)
+- To read changes from a specific partition key.
+- To control the pace at which your client receives changes for processing.
+- To perform a one-time read of the existing data in the change feed (for example, to do a data migration).
-Here's some key differences between the change feed processor and pull model:
+Here are some key differences between the change feed processor and the change feed pull model:

-|Feature | Change feed processor| Pull model |
+|Feature | Change feed processor| Change feed pull model |
| | | |
-| Keeping track of current point in processing change feed | Lease (stored in an Azure Cosmos DB container) | Continuation token (stored in memory or manually persisted) |
+| Keeping track of the current point in processing the change feed | Lease (stored in an Azure Cosmos DB container) | Continuation token (stored in memory or manually persisted) |
| Ability to replay past changes | Yes, with push model | Yes, with pull model|
-| Polling for future changes | Automatically checks for changes based on user-specified `WithPollInterval` | Manual |
-| Behavior where there are no new changes | Automatically wait `WithPollInterval` and recheck | Must check status and manually recheck |
-| Process changes from entire container | Yes, and automatically parallelized across multiple threads/machines consuming from the same container| Yes, and manually parallelized using FeedRange |
-| Process changes from just a single partition key | Not supported | Yes|
+| Polling for future changes | Automatically checks for changes based on the user-specified `WithPollInterval` value | Manual |
+| Behavior where there are no new changes | Automatically waits for the `WithPollInterval` time and then rechecks | Must check status and manually recheck |
+| Process changes from an entire container | Yes, and automatically parallelized across multiple threads and machines that consume from the same container| Yes, and manually parallelized by using `FeedRange` |
+| Process changes from only a single partition key | Not supported | Yes|

> [!NOTE]-> Unlike when reading using the change feed processor, you must explicitly handle cases where there are no new changes.
+> When you use the pull model, unlike when reading by using the change feed processor, you must explicitly handle cases where there are no new changes.

-## Working with the pull model
+## Work with the pull model

### [.NET](#tab/dotnet)

-To process the change feed using the pull model, create a `FeedIterator`. When you initially create a `FeedIterator`, you must specify a required `ChangeFeedStartFrom` value, which consists of both the starting position for reading changes and the desired `FeedRange`. The `FeedRange` is a range of partition key values and specifies the items that can be read from the change feed using that specific `FeedIterator`. You must also specify a required `ChangeFeedMode` value for the mode in which you want to process changes: [latest version](change-feed-modes.md#latest-version-change-feed-mode) or [all versions and deletes](change-feed-modes.md#all-versions-and-deletes-change-feed-mode-preview). Use either `ChangeFeedMode.LatestVersion` or `ChangeFeedMode.AllVersionsAndDeletes` to indicate which mode you want to read change feed in. When using all versions and deletes mode, you must select a change feed start from value of either `Now()` or from a specific continuation token.
+To process the change feed by using the pull model, create an instance of `FeedIterator`. When you initially create `FeedIterator`, you must specify a required `ChangeFeedStartFrom` value, which consists of both the starting position for reading changes and the value you want to use for `FeedRange`.
The `FeedRange` is a range of partition key values and specifies the items that can be read from the change feed by using that specific `FeedIterator`. You must also specify a required `ChangeFeedMode` value for the mode in which you want to process changes: [latest version](change-feed-modes.md#latest-version-change-feed-mode) or [all versions and deletes](change-feed-modes.md#all-versions-and-deletes-change-feed-mode-preview). Use either `ChangeFeedMode.LatestVersion` or `ChangeFeedMode.AllVersionsAndDeletes` to indicate which mode you want to use to read the change feed. When you use all versions and deletes mode, you must select a change feed start from value of either `Now()` or from a specific continuation token. -You can optionally specify `ChangeFeedRequestOptions` to set a `PageSizeHint`. When set, this property sets the maximum number of items received per page. If operations in the monitored collection are performed through stored procedures, transaction scope is preserved when reading items from the change feed. As a result, the number of items received could be higher than the specified value so that the items changed by the same transaction are returned as part of one atomic batch. +You can optionally specify `ChangeFeedRequestOptions` to set a `PageSizeHint`. When set, this property sets the maximum number of items received per page. If operations in the monitored collection are performed through stored procedures, transaction scope is preserved when reading items from the change feed. As a result, the number of items received might be higher than the specified value so that the items changed by the same transaction are returned as part of one atomic batch. -Here's an example for obtaining a `FeedIterator` in latest version mode that returns entity objects, in this case a `User` object: +Here's an example of how to obtain `FeedIterator` in latest version mode that returns entity objects, in this case a `User` object: ```csharp FeedIterator<User> InteratorWithPOCOS = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.LatestVersion); FeedIterator<User> InteratorWithPOCOS = container.GetChangeFeedIterator<User>(Ch > [!TIP] > Prior to version `3.34.0`, latest version mode can be used by setting `ChangeFeedMode.Incremental`. Both `Incremental` and `LatestVersion` refer to latest version mode of the change feed and applications that use either mode will see the same behavior. -All versions and deletes mode is in preview and can be used with preview .NET SDK versions >= `3.32.0-preview`. Here's an example for obtaining a `FeedIterator` in all versions and deletes mode that returns dynamic objects: +All versions and deletes mode is in preview and can be used with preview .NET SDK versions >= `3.32.0-preview`. Here's an example for obtaining `FeedIterator` in all versions and deletes mode that returns dynamic objects: ```csharp FeedIterator<dynamic> InteratorWithDynamic = container.GetChangeFeedIterator<dynamic>(ChangeFeedStartFrom.Now(), ChangeFeedMode.AllVersionsAndDeletes); ``` -> [!Note] -> In latest version mode, you will receive objects that represent the item that changed with some [extra metadata](change-feed-modes.md#parsing-the-response-object). All versions and deletes mode returns a different data model, and you can find more information [here](change-feed-modes.md#parsing-the-response-object-1). 
+> [!NOTE] +> In latest version mode, you receive objects that represent the item that changed, with some [extra metadata](change-feed-modes.md#parse-the-response-object). All versions and deletes mode returns a different data model. For more information, see [Parse the response object](change-feed-modes.md#parse-the-response-object-1). -### Consuming the change feed with streams +### Consume the change feed via streams -The `FeedIterator` for both change feed modes comes in two flavors. In addition to the examples that return entity objects, you can also obtain the response with `Stream` support. Streams allow you to read data without having it first deserialized, saving on client resources. +`FeedIterator` for both change feed modes has two options. In addition to the examples that return entity objects, you can also obtain the response with `Stream` support. Streams allow you to read data without having it first deserialized, so you save on client resources. -Here's an example for obtaining a `FeedIterator` in latest version mode that returns a `Stream`: +Here's an example of how to obtain `FeedIterator` in latest version mode that returns `Stream`: ```csharp FeedIterator iteratorWithStreams = container.GetChangeFeedStreamIterator(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.LatestVersion); ``` -### Consuming an entire container's changes +### Consume the changes for an entire container -If you don't supply a `FeedRange` to a `FeedIterator`, you can process an entire container's change feed at your own pace. Here's an example, which starts reading all changes starting at the current time using latest version mode: +If you don't supply a `FeedRange` parameter to `FeedIterator`, you can process an entire container's change feed at your own pace. Here's an example, which starts reading all changes, starting at the current time by using latest version mode: ```csharp FeedIterator<User> iteratorForTheEntireContainer = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Now(), ChangeFeedMode.LatestVersion); while (iteratorForTheEntireContainer.HasMoreResults) } ``` -Because the change feed is effectively an infinite list of items encompassing all future writes and updates, the value of `HasMoreResults` is always true. When you try to read the change feed and there are no new changes available, you receive a response with `NotModified` status. In the above example, it's handled by waiting 5 seconds before rechecking for changes. +Because the change feed is effectively an infinite list of items that encompass all future writes and updates, the value of `HasMoreResults` is always `true`. When you try to read the change feed and there are no new changes available, you receive a response with `NotModified` status. In the preceding example, it's handled by waiting five seconds before rechecking for changes. -### Consuming a partition key's changes +### Consume the changes for a partition key -In some cases, you may only want to process a specific partition key's changes. You can obtain a `FeedIterator` for a specific partition key and process the changes the same way that you can for an entire container. +In some cases, you might want to process only the changes for a specific partition key. You can obtain `FeedIterator` for a specific partition key and process the changes the same way that you can for an entire container. 
```csharp FeedIterator<User> iteratorForPartitionKey = container.GetChangeFeedIterator<User>( while (iteratorForThePartitionKey.HasMoreResults) } ``` -### Using FeedRange for parallelization +### Use FeedRange for parallelization In the [change feed processor](change-feed-processor.md), work is automatically spread across multiple consumers. In the change feed pull model, you can use the `FeedRange` to parallelize the processing of the change feed. A `FeedRange` represents a range of partition key values. -Here's an example showing how to obtain a list of ranges for your container: +Here's an example that shows how to get a list of ranges for your container: ```csharp IReadOnlyList<FeedRange> ranges = await container.GetFeedRangesAsync(); ``` -When you obtain of list of FeedRanges for your container, you'll get one `FeedRange` per [physical partition](../partitioning-overview.md#physical-partitions). +When you get a list of `FeedRange` values for your container, you get one `FeedRange` per [physical partition](../partitioning-overview.md#physical-partitions). -Using a `FeedRange`, you can then create a `FeedIterator` to parallelize the processing of the change feed across multiple machines or threads. Unlike the previous example that showed how to obtain a `FeedIterator` for the entire container or a single partition key, you can use FeedRanges to obtain multiple FeedIterators, which can process the change feed in parallel. +By using a `FeedRange`, you can create a `FeedIterator` to parallelize the processing of the change feed across multiple machines or threads. Unlike the previous example that showed how to obtain a `FeedIterator` for the entire container or a single partition key, you can use FeedRanges to obtain multiple FeedIterators, which can process the change feed in parallel. -In the case where you want to use FeedRanges, you need to have an orchestrator process that obtains FeedRanges and distributes them to those machines. This distribution could be: +In the case where you want to use FeedRanges, you need to have an orchestrator process that obtains FeedRanges and distributes them to those machines. This distribution might be: -* Using `FeedRange.ToJsonString` and distributing this string value. The consumers can use this value with `FeedRange.FromJsonString`. -* If the distribution is in-process, passing the `FeedRange` object reference. +- Using `FeedRange.ToJsonString` and distributing this string value. The consumers can use this value with `FeedRange.FromJsonString`. +- If the distribution is in-process, passing the `FeedRange` object reference. -Here's a sample that shows how to read from the beginning of the container's change feed using two hypothetical separate machines that are reading in parallel: +Here's a sample that shows how to read from the beginning of the container's change feed by using two hypothetical separate machines that read in parallel: Machine 1: while (iteratorB.HasMoreResults) } ``` -### Saving continuation tokens +### Save continuation tokens -You can save the position of your `FeedIterator` by obtaining the continuation token. A continuation token is a string value that keeps of track of your FeedIterator's last processed changes and allows the `FeedIterator` to resume at this point later. The continuation token, if specified, takes precedence over the start time and start from beginning values. The following code reads through the change feed since container creation. 
After no more changes are available, it will persist a continuation token so that change feed consumption can be later resumed.
+You can save the position of your `FeedIterator` by obtaining the continuation token. A continuation token is a string value that keeps track of your FeedIterator's last processed changes and allows the `FeedIterator` to resume at this point later. The continuation token, if specified, takes precedence over the start time and start from beginning values. The following code reads through the change feed since container creation. After no more changes are available, it persists a continuation token so that change feed consumption can be resumed later.

```csharp
FeedIterator<User> iterator = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.LatestVersion);
while (iterator.HasMoreResults)
FeedIterator<User> iteratorThatResumesFromLastPoint = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.ContinuationToken(continuation), ChangeFeedMode.LatestVersion);
```

-As long as the Azure Cosmos DB container still exists, a FeedIterator's continuation token never expires.
+When you're using latest version mode, the `FeedIterator` continuation token never expires as long as the Azure Cosmos DB container still exists. When you're using all versions and deletes mode, the `FeedIterator` continuation token is valid as long as the changes happened within the retention window for continuous backups.

### [Java](#tab/java)

-To process the change feed using the pull model, create a `Iterator<FeedResponse<JsonNode>> responseIterator`. When creating `CosmosChangeFeedRequestOptions`, you must specify where to start reading the change feed from and pass the desired `FeedRange`. The `FeedRange` is a range of partition key values that specifies the items that can be read from the change feed.
+To process the change feed by using the pull model, create an instance of `Iterator<FeedResponse<JsonNode>> responseIterator`. When you create `CosmosChangeFeedRequestOptions`, you must specify where to start reading the change feed from and pass the `FeedRange` parameter that you want to use. The `FeedRange` is a range of partition key values that specifies the items that can be read from the change feed.

-If you want to read the change feed in [all versions and deletes mode](change-feed-modes.md#all-versions-and-deletes-change-feed-mode-preview), you must also specify `allVersionsAndDeletes()` when creating the `CosmosChangeFeedRequestOptions`. All versions and deletes mode doesn't support processing the change feed from the beginning or from a point in time. You must either process changes from now or from a continuation token. All versions and deletes mode is in preview and is available in Java SDK version >= `4.42.0`.
+If you want to read the change feed in [all versions and deletes mode](change-feed-modes.md#all-versions-and-deletes-change-feed-mode-preview), you must also specify `allVersionsAndDeletes()` when you create the `CosmosChangeFeedRequestOptions`. All versions and deletes mode doesn't support processing the change feed from the beginning or from a point in time. You must either process changes from now or from a continuation token. All versions and deletes mode is in preview and is available in Java SDK version >= `4.42.0`.

-### Consuming an entire container's changes
+### Consume the changes for an entire container

-If you specify `FeedRange.forFullRange()`, you can process an entire container's change feed at your own pace.
You can optionally specify a value in `byPage()`. When set, this property sets the maximum number of items received per page.
+If you specify `FeedRange.forFullRange()`, you can process the change feed for an entire container at your own pace. You can optionally specify a value in `byPage()`. When set, this property sets the maximum number of items received per page.

>[!NOTE]-> All of the below code snippets are taken from a samples in GitHub. You can find the latest version mode sample [here](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java) and the all versions and deletes mode sample [here](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModelForAllVersionsAndDeletesMode.java).
--Here is an example for obtaining a `responseIterator` in latest version mode:
-- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java?name=FeedResponseIterator)]
+> All of the following code snippets are taken from samples in GitHub. You can use the [latest version mode sample](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java) and the [all versions and deletes mode sample](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModelForAllVersionsAndDeletesMode.java).

-Here is an example for obtaining a `responseIterator` in all versions and deletes mode:
+Here's an example of how to obtain a `responseIterator` value in latest version mode:

- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModelForAllVersionsAndDeletesMode.java?name=FeedResponseIterator)]
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java?name=FeedResponseIterator)]

-We can then iterate over the results. Because the change feed is effectively an infinite list of items encompassing all future writes and updates, the value of `responseIterator.hasNext()` is always true. Here is an example in latest version mode, which reads all changes starting from the beginning. Each iteration persists a continuation token after processing all events, and will pick up from the last processed point in the change feed. This is handled using `createForProcessingFromContinuation`:

- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java?name=AllFeedRanges)]

+Here's an example of how to obtain a `responseIterator` in all versions and deletes mode:
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModelForAllVersionsAndDeletesMode.java?name=FeedResponseIterator)]
+We can then iterate over the results. Because the change feed is effectively an infinite list of items that encompasses all future writes and updates, the value of `responseIterator.hasNext()` is always `true`. Here's an example in latest version mode, which reads all changes, starting from the beginning. Each iteration persists a continuation token after it processes all events.
It picks up from the last processed point in the change feed and is handled by using `createForProcessingFromContinuation`:

-### Consuming a partition key's changes
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java?name=AllFeedRanges)]

-In some cases, you may only want to process a specific partition key's changes. You can process the changes for a specific partition key in the same way that you can for an entire container. Here's an example using latest version mode:
+### Consume a partition key's changes

- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java?name=PartitionKeyProcessing)]
+In some cases, you might want to process only the changes for a specific partition key. You can process the changes for a specific partition key the same way that you can for an entire container. Here's an example that uses latest version mode:
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java?name=PartitionKeyProcessing)]

-### Using FeedRange for parallelization
+### Use FeedRange for parallelization

In the [change feed processor](change-feed-processor.md), work is automatically spread across multiple consumers. In the change feed pull model, you can use the `FeedRange` to parallelize the processing of the change feed. A `FeedRange` represents a range of partition key values.

-Here's an example using latest version mode showing how to obtain a list of ranges for your container:
+Here's an example that uses latest version mode and shows how to obtain a list of ranges for your container:

- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java?name=GetFeedRanges)]
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java?name=GetFeedRanges)]

When you obtain a list of FeedRanges for your container, you get one `FeedRange` per [physical partition](../partitioning-overview.md#physical-partitions).

-Using a `FeedRange`, you can then parallelize the processing of the change feed across multiple machines or threads. Unlike the previous example that showed how to process changes for the entire container or a single partition key, you can use FeedRanges to process the change feed in parallel.
+By using a `FeedRange`, you can parallelize the processing of the change feed across multiple machines or threads. Unlike the previous example that showed how to process changes for the entire container or a single partition key, you can use FeedRanges to process the change feed in parallel.

-In the case where you want to use FeedRanges, you need to have an orchestrator process that obtains FeedRanges and distributes them to those machines. This distribution could be:
+In the case where you want to use FeedRanges, you need to have an orchestrator process that obtains FeedRanges and distributes them to those machines. This distribution might be:

-* Using `FeedRange.toString()` and distributing this string value.
-* If the distribution is in-process, passing the `FeedRange` object reference.
+- Using `FeedRange.toString()` and distributing this string value, as the sketch after this list illustrates.
+- If the distribution is in-process, passing the `FeedRange` object reference.
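As a rough sketch of the first option, the following Java code serializes each `FeedRange` on the orchestrator and rebuilds it on a worker. The container object, the page size, and the hand-off mechanism between orchestrator and worker are hypothetical, and it assumes `FeedRange.fromString` as the counterpart of `toString` in the `azure-cosmos` SDK.

```java
import com.azure.cosmos.CosmosAsyncContainer;
import com.azure.cosmos.models.CosmosChangeFeedRequestOptions;
import com.azure.cosmos.models.FeedRange;
import com.azure.cosmos.models.FeedResponse;
import com.fasterxml.jackson.databind.JsonNode;

import java.util.Iterator;
import java.util.List;
import java.util.stream.Collectors;

public class FeedRangeDistributionSketch {

    // Orchestrator side: serialize each FeedRange so it can be handed to a worker,
    // for example through a queue or a configuration store.
    static List<String> serializeRanges(CosmosAsyncContainer container) {
        return container.getFeedRanges().block().stream()
                .map(FeedRange::toString)
                .collect(Collectors.toList());
    }

    // Worker side: rebuild the FeedRange and read one page of changes for that range only.
    static void readOneRange(CosmosAsyncContainer container, String serializedRange) {
        FeedRange range = FeedRange.fromString(serializedRange);

        CosmosChangeFeedRequestOptions options =
                CosmosChangeFeedRequestOptions.createForProcessingFromBeginning(range);

        Iterator<FeedResponse<JsonNode>> responseIterator = container
                .queryChangeFeed(options, JsonNode.class)
                .byPage(100)
                .toIterable()
                .iterator();

        if (responseIterator.hasNext()) {
            FeedResponse<JsonNode> response = responseIterator.next();
            response.getResults().forEach(item -> System.out.println("Change: " + item));
            // Persist response.getContinuationToken() if this worker needs to resume later.
        }
    }
}
```

On later runs, a worker would typically switch to `createForProcessingFromContinuation` with a token it persisted earlier, as the continuation-token example earlier in this section shows.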
-Here's a sample using latest version mode that shows how to read from the beginning of the container's change feed using two hypothetical separate machines that are reading in parallel: +Here's a sample that uses latest version mode. It shows how to read from the beginning of the container's change feed by using two hypothetical separate machines that read in parallel: Machine 1: - [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java?name=Machine1)] +[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java?name=Machine1)] Machine 2: - [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java?name=Machine2)] +[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java?name=Machine2)] ## Next steps -* [Overview of change feed](../change-feed.md) -* [Using the change feed processor](change-feed-processor.md) -* [Trigger Azure Functions](change-feed-functions.md) +- [Overview of change feed](../change-feed.md) +- [Using the change feed processor](change-feed-processor.md) +- [Trigger Azure Functions](change-feed-functions.md) |
cosmos-db | How To Migrate From Change Feed Library | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-migrate-from-change-feed-library.md | The .NET V3 SDK has several breaking changes, the following are the key steps to 1. Customizations that use `WithProcessorOptions` should be updated to use `WithLeaseConfiguration` and `WithPollInterval` for intervals, `WithStartTime` [for start time](./change-feed-processor.md#starting-time), and `WithMaxItems` to define the maximum item count. 1. Set the `processorName` on `GetChangeFeedProcessorBuilder` to match the value configured on `ChangeFeedProcessorOptions.LeasePrefix`, or use `string.Empty` otherwise. 1. The changes are no longer delivered as a `IReadOnlyList<Document>`, instead, it's a `IReadOnlyCollection<T>` where `T` is a type you need to define, there is no base item class anymore.-1. To handle the changes, you no longer need an implementation of `IChangeFeedObserver`, instead you need to [define a delegate](change-feed-processor.md#implementing-the-change-feed-processor). The delegate can be a static Function or, if you need to maintain state across executions, you can create your own class and pass an instance method as delegate. +1. To handle the changes, you no longer need an implementation of `IChangeFeedObserver`, instead you need to [define a delegate](change-feed-processor.md#implement-the-change-feed-processor). The delegate can be a static Function or, if you need to maintain state across executions, you can create your own class and pass an instance method as delegate. For example, if the original code to build the change feed processor looks as follows: |
cosmos-db | Materialized Views | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/materialized-views.md | Title: Materialized views (preview) -description: Efficiently query a base container with predefined filters using Materialized views for Azure Cosmos DB for NoSQL. +description: Learn how to efficiently query a base container by using predefined filters in materialized views for Azure Cosmos DB for NoSQL. Last updated 06/09/2023 [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)] > [!IMPORTANT]-> The materialized view feature of Azure Cosmos DB for NoSQL is currently in preview. You can enable this feature using the Azure portal. This preview is provided without a service-level agreement. At this time, materialized views are not recommended for production workloads. Certain features of this preview may not be supported or may have constrained capabilities. For more information, see [supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +> The materialized view feature of Azure Cosmos DB for NoSQL is currently in preview. You can enable this feature by using the Azure portal. This preview is provided without a service-level agreement. At this time, we don't recommend that you use materialized views for production workloads. Certain features of this preview might not be supported or might have constrained capabilities. For more information, see the [supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). -Many times applications are required to make queries that don't specify the partition key. In these cases, the queries could scan through all data for a small result set. The queries end up being expensive as they end up inadvertently executing as a cross-partition query. +Applications frequently are required to make queries that don't specify a partition key. In these cases, the queries might scan through all data for a small result set. The queries end up being expensive because they inadvertently run as a cross-partition query. -Materialized views, when defined, help provide a means to efficiently query a base container in Azure Cosmos DB with filters that don't include the partition key. When users write to the base container, the materialized view is built automatically in the background. This view can have a different partition key for efficient lookups. The view also only contains fields explicitly projected from the base container. This view is a read-only table. +Materialized views, when defined, help provide a way to efficiently query a base container in Azure Cosmos DB by using filters that don't include the partition key. When users write to the base container, the materialized view is built automatically in the background. This view can have a different partition key for efficient lookups. The view also contains only fields that are explicitly projected from the base container. This view is a read-only table. With a materialized view, you can: -- Use the view as lookup or mapping container to persist cross-partition scans that would otherwise be expensive queries.+- Use the view as a lookup or mapping container to persist cross-partition scans that would otherwise be expensive queries. 
- Provide a SQL-based predicate (without conditions) to populate only specific fields.-- Create real-time views that simplify event-based scenarios that are commonly stored as separate containers using change feed triggers.+- Use change feed triggers to create real-time views to simplify event-based scenarios that are commonly stored as separate containers. -Materialized views have many benefits that include, but aren't limited to: +The benefits of using materialized views include, but aren't limited to: -- You can implement server-side denormalization using materialized views. With server-side denormalization, you can avoid multiple independent tables and computationally complex denormalization in client applications.-- Materialized views automatically update views to keep them consistent with the base container. This automatic update abstracts the responsibilities of your client applications that would otherwise typically implement custom logic to perform dual writes to the base container and the view.+- You can implement server-side denormalization by using materialized views. With server-side denormalization, you can avoid multiple independent tables and computationally complex denormalization in client applications. +- Materialized views automatically update views to keep views consistent with the base container. This automatic update abstracts the responsibilities of your client applications that would otherwise typically implement custom logic to perform dual writes to the base container and the view. - Materialized views optimize read performance by reading from a single view. - You can specify throughput for the materialized view independently. - You can configure a materialized view builder layer to map to your requirements to hydrate a view.-- Materialized views improve write performance (when compared to multi-container-write strategy) as write operations only need to be written to the base container.-- Additionally, the Azure Cosmos DB implementation of materialized views is based on a pull model. This implementation doesn't affect write performance.+- Materialized views improve write performance (compared to a multi-container-write strategy) because write operations need to be written only to the base container. +- The Azure Cosmos DB implementation of materialized views is based on a pull model. This implementation doesn't affect write performance. ## Prerequisites Materialized views have many benefits that include, but aren't limited to: ## Enable materialized views -Use the Azure CLI to enable the materialized views feature either with a native command or a REST API operation on your Cosmos DB for NoSQL account. +Use the Azure CLI to enable the materialized views feature either by using a native command or a REST API operation on your Cosmos DB for NoSQL account. ### [Azure portal](#tab/azure-portal) 1. Sign in to the [Azure portal](https://portal.azure.com/). -1. Navigate to your API for NOSQL account. +1. Go to your API for NOSQL account. 1. In the resource menu, select **Settings**. Use the Azure CLI to enable the materialized views feature either with a native ### [Azure CLI](#tab/azure-cli) -1. Sign-in to the Azure CLI. +1. Sign in to the Azure CLI. ```azurecli az login ``` - > [!NOTE] - > For more information on installing the Azure CLI, see [how to install the Azure CLI](/cli/azure/install-azure-cli). + > [!NOTE] + > If you need to first install the Azure CLI, see [How to install the Azure CLI](/cli/azure/install-azure-cli). 1. 
Define the variables for the resource group and account name for your existing API for NoSQL account. Use the Azure CLI to enable the materialized views feature either with a native # Variable for account name accountName="<account-name>" - # Variable for Subscription + # Variable for Azure subscription subscriptionId="<subscription-id>" ``` -1. Create a new JSON file named **capabilities.json** with the capabilities manifest. +1. Create a new JSON file named *capabilities.json* by using the capabilities manifest. ```json { Use the Azure CLI to enable the materialized views feature either with a native accountId="/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.DocumentDB/databaseAccounts/$accountName" ``` -1. Enable the preview materialized views feature for the account using the REST API and [`az rest`](/cli/azure/reference-index#az-rest) with an HTTP `PATCH` verb. +1. Enable the preview materialized views feature for the account by using the REST API and [az rest](/cli/azure/reference-index#az-rest) with an HTTP `PATCH` verb. ```azurecli az rest \ Create a materialized view builder to automatically transform data and write to 1. Sign in to the [Azure portal](https://portal.azure.com/). -1. Navigate to your API for NoSQL account. +1. Go to your API for NoSQL account. 1. In the resource menu, select **Materialized Views Builder**. -1. On the **Materialized Views Builder** page, configure the SKU and number of instances for the builder. +1. On the **Materialized Views Builder** page, configure the SKU and the number of instances for the builder. - > [!NOTE] - > This resource menu option and page will only appear when the Materialized Views feature is enabled for the account. + > [!NOTE] + > This resource menu option and page appear only when the materialized views feature is enabled for the account. 1. Select **Save**. ### [Azure CLI](#tab/azure-cli) -1. Create a new JSON file named **builder.json** with the builder manifest. +1. Create a new JSON file named *builder.json* by using the builder manifest: ```json { Create a materialized view builder to automatically transform data and write to } ``` -1. Enable the materialized views builder for the account using the REST API and `az rest` with an HTTP `PUT` verb. +1. Enable the materialized views builder for the account by using the REST API and `az rest` with an HTTP `PUT` verb: ```azurecli az rest \ Create a materialized view builder to automatically transform data and write to --body @builder.json ``` -1. Wait for a couple of minutes and check the status using `az rest` again with the HTTP `GET` verb. The status in the output should now be `Running`: +1. Wait for a couple of minutes, and then check the status by using `az rest` again with the HTTP `GET` verb. The status in the output should now be `Running`. ```azurecli az rest \ Create a materialized view builder to automatically transform data and write to Azure Cosmos DB For NoSQL uses a materialized view builder compute layer to maintain the views. -You get the flexibility to configure the view builder's compute instances based on your latency and lag requirements to hydrate the views. From a technical stand point, this compute layer helps manage connections between partitions in a more efficient manner even when the data size is large and the number of partitions is high. +You have the flexibility of configuring the view builder's compute instances based on your latency and lag requirements to hydrate the views. 
From a technical standpoint, this compute layer helps you manage connections between partitions in a more efficient manner, even when the data size is large and the number of partitions is high.

-The compute containers are shared among all materialized views within an Azure Cosmos DB account. Each provisioned compute container spawns off multiple tasks that read the change feed from base container partitions and writes data to the target materialized view\[s\]. The compute container transforms the data per the materialized view definition for each materialized view in the account.
+The compute containers are shared among all materialized views within an Azure Cosmos DB account. Each provisioned compute container initiates multiple tasks that read the change feed from the base container partitions and write data to the target materialized view or views. The compute container transforms the data per the materialized view definition for each materialized view in the account.

## Create a materialized view

-Once your account and Materialized View Builder is set up, you should be able to create Materialized views using REST API.
+After your account and the materialized view builder are set up, you should be able to create materialized views by using the REST API.

### [Azure portal / Azure CLI](#tab/azure-portal+azure-cli)

-1. Use the Azure portal, Azure SDK, Azure CLI, or REST API to create a source container with `/accountId` as the partition key path. Name this source container **mv-src**.
+1. Use the Azure portal, the Azure SDK, the Azure CLI, or the REST API to create a source container that has `/accountId` as the partition key path. Name this source container `mv-src`.

    - > [!NOTE]
    - > `/accountId` is only used as an example in this article. For your own containers, select a partition key that works for your solution.
    + > [!NOTE]
    + > The `/accountId` field is used only as an example in this article. For your own containers, select a partition key that works for your solution.

-1. Insert a few items in the source container. To better understand this Getting Started, make sure that the items mandatorily have `accountId`, `fullName`, and `emailAddress` fields. A sample item could look like this:
+1. Insert a few items in the source container. To follow the examples that are shown in this article, make sure that the items have `accountId`, `fullName`, and `emailAddress` fields. A sample item might look like this example:

    ```json
    {
 Once your account and Materialized View Builder is set up, you should be able to
    }
    ```

    - > [!NOTE]
    - > In this example, you populate the source container with sample data. You can also create a materialized view from an empty source container.
    + > [!NOTE]
    + > In this example, you populate the source container with sample data. You can also create a materialized view from an empty source container.

-1. Now, create a materialized view named **mv-target** with a partition key path that is different from the source container. For this example, specify `/emailAddress` as the partition key path for the **mv-target** container.
+1. Now, create a materialized view named `mv-target` with a partition key path that is different from the source container. For this example, specify `/emailAddress` as the partition key path for the `mv-target` container.

    - 1. First, we create a definition manifest for a materialized view and save it in a JSON file named **definition.json**.
    + 1.
First, create a definition manifest for a materialized view and save it in a JSON file named *definition.json*: ```json { Once your account and Materialized View Builder is set up, you should be able to } ``` - > [!NOTE] - > In the template, notice that the partitionKey path is set as `/emailAddress`. We also have more parameters to specify the source collection and the definition to populate the materialized view. + > [!NOTE] + > In the template, notice that the partitionKey path is set as `/emailAddress`. We also have more parameters to specify the source collection and the definition to populate the materialized view. -1. Now, make a REST API call to create the materialized view as defined in the **mv_definition.json** file. Use the Azure CLI to make the REST API call. +1. Now, make a REST API call to create the materialized view as defined in the *mv_definition.json* file. Use the Azure CLI to make the REST API call. - 1. Create a variable for the name of the materialized view and source database name. + 1. Create a variable for the name of the materialized view and source database name: ```azurecli materializedViewName="mv-target" Once your account and Materialized View Builder is set up, you should be able to databaseName="<database-that-contains-source-collection>" ``` - 1. Make a REST API call to create the materialized view. + 1. Make a REST API call to create the materialized view: ```azurecli az rest \ Once your account and Materialized View Builder is set up, you should be able to --headers content-type=application/json ``` - 1. Check the status of the materialized view container creation using the REST API: + 1. Check the status of the materialized view container creation by using the REST API: ```azurecli az rest \ Once your account and Materialized View Builder is set up, you should be able to -Once the materialized view is created, the materialized view container automatically syncs changes with the source container. Try executing CRUD operations in the source container and observe the same changes in the mv container. +After the materialized view is created, the materialized view container automatically syncs changes with the source container. Try executing create, read, update, and delete (CRUD) operations in the source container. You'll see the same changes in the materialized view container. > [!NOTE]-> Materialized View containers are read-only container for the end-user so they can only be modified automatically by the Materialized View Builders. +> Materialized view containers are read-only containers for users. The containers can be automatically modified only by a materialized view builder. ## Current limitations -There are a few limitations with the Cosmos DB NoSQL API Materialized View Feature while in preview: +There are a few limitations with the Azure Cosmos DB for NoSQL API materialized view feature while it is in preview: - Materialized views can't be created on a container that existed before support for materialized views was enabled on the account. To use materialized views, create a new container after the feature is enabled. - `WHERE` clauses aren't supported in the materialized view definition.-- You can only project source container items' JSON object property list in the materialized view definition. Now, the list can only contain one level of properties in the JSON tree.+- You can project only the source container item's JSON `object` property list in the materialized view definition. 
Currently, the list can contain only one level of properties in the JSON tree. - In the materialized view definition, aliases aren't supported for fields of documents.-- It's recommended to create a materialized view when the source container is still empty or has only a few items.-- Restoring a container from a backup doesn't restore materialized views. You need to re-create the materialized views after the restore process is complete.-- All materialized views defined on a specific source container must be deleted before deleting the source container.-- point-in-time restore, hierarchical partitioning, end-to-end encryption isn't supported on source containers, which have materialized views associated with them.+- We recommend that you create a materialized view when the source container is still empty or has only a few items. 
+- Restoring a container from a backup doesn't restore materialized views. You must re-create the materialized views after the restore process is finished. 
+- You must delete all materialized views that are defined on a specific source container before you delete the source container. 
+- Point-in-time restore, hierarchical partitioning, and end-to-end encryption aren't supported on source containers that have materialized views associated with them. - Role-based access control is currently not supported for materialized views.-- Cross-tenant customer-managed-key (CMK) encryption isn't supported on materialized views.+- Cross-tenant customer-managed key (CMK) encryption isn't supported on materialized views. 
+- Currently, this feature can't be enabled if any of the following features are enabled: partition merge, analytical store, or continuous backup. 
 
-In addition to the above limitations, consider the following extra limitations: +Note the following additional limitations: 
 
- Availability zones- - Materialized views can't be enabled on an account that has availability zone enabled regions. - - Adding a new region with an availability zone isn't supported once `enableMaterializedViews` is set to `true` on the account. + - Materialized views can't be enabled on an account that has availability zone-enabled regions. 
+ - Adding a new region with an availability zone isn't supported after `enableMaterializedViews` is set to `true` on the account. 
- Periodic backup and restore- - Materialized views aren't automatically restored with the restore process. You'll need to re-create the materialized views after the restore process is complete. Then, you should configure `enableMaterializedViews` on their restored account before creating the materialized views and builders again. + - Materialized views aren't automatically restored as part of the restore process. You must re-create the materialized views after the restore process is finished. Then, you should configure `enableMaterializedViews` on the restored account before you create the materialized views and builders again. 
 
## Next steps |
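Because the walkthrough partitions the view on `/emailAddress` specifically so lookups by email address stay within a single partition, a minimal query sketch may help illustrate the payoff. The following C# snippet is not part of the original article's samples: it assumes the standard `Microsoft.Azure.Cosmos` .NET SDK v3, the `mv-target` container name from the walkthrough, and a placeholder database name and email value.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class MaterializedViewQuerySketch
{
    public static async Task QueryViewByEmailAsync(CosmosClient client, string email)
    {
        // "mv-target" comes from the walkthrough above; "myDatabase" is a placeholder.
        Container view = client.GetContainer("myDatabase", "mv-target");

        QueryDefinition query = new QueryDefinition(
                "SELECT * FROM c WHERE c.emailAddress = @email")
            .WithParameter("@email", email);

        // Scoping the query to the view's partition key (/emailAddress) keeps it single-partition.
        FeedIterator<dynamic> iterator = view.GetItemQueryIterator<dynamic>(
            query,
            requestOptions: new QueryRequestOptions { PartitionKey = new PartitionKey(email) });

        while (iterator.HasMoreResults)
        {
            foreach (dynamic item in await iterator.ReadNextAsync())
            {
                Console.WriteLine(item);
            }
        }
    }
}
```

The same query against the source container, which is partitioned on `/accountId`, would fan out across partitions; running it against the view is what makes the lookup efficient.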
cosmos-db | Migrate Dotnet V3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/migrate-dotnet-v3.md | The `FeedOptions` class in SDK v2 has now been renamed to `QueryRequestOptions` |`FeedOptions.PopulateQueryMetrics`|Removed. It is now enabled by default and part of the [diagnostics](troubleshoot-dotnet-sdk.md#capture-diagnostics).| |`FeedOptions.RequestContinuation`|Removed. It is now promoted to the query methods themselves. | |`FeedOptions.JsonSerializerSettings`|Removed. Serialization can be customized through a [custom serializer](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.serializer) or [serializer options](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.serializeroptions).|-|`FeedOptions.PartitionKeyRangeId`|Removed. Same outcome can be obtained from using [FeedRange](change-feed-pull-model.md#using-feedrange-for-parallelization) as input to the query method.| +|`FeedOptions.PartitionKeyRangeId`|Removed. Same outcome can be obtained from using [FeedRange](change-feed-pull-model.md#use-feedrange-for-parallelization) as input to the query method.| |`FeedOptions.DisableRUPerMinuteUsage`|Removed.| ### Constructing a client Executing change feed queries on the v3 SDK is considered to be using the [chang | .NET v2 SDK | .NET v3 SDK | |-|-|-|`ChangeFeedOptions.PartitionKeyRangeId`|`FeedRange` - In order to achieve parallelism reading the change feed [FeedRanges](change-feed-pull-model.md#using-feedrange-for-parallelization) can be used. It's no longer a required parameter, you can [read the Change Feed for an entire container](change-feed-pull-model.md#consuming-an-entire-containers-changes) easily now.| -|`ChangeFeedOptions.PartitionKey`|`FeedRange.FromPartitionKey` - A FeedRange representing the desired Partition Key can be used to [read the Change Feed for that Partition Key value](change-feed-pull-model.md#consuming-a-partition-keys-changes).| -|`ChangeFeedOptions.RequestContinuation`|`ChangeFeedStartFrom.Continuation` - The change feed iterator can be stopped and resumed at any time by [saving the continuation and using it when creating a new iterator](change-feed-pull-model.md#saving-continuation-tokens).| +|`ChangeFeedOptions.PartitionKeyRangeId`|`FeedRange` - In order to achieve parallelism reading the change feed [FeedRanges](change-feed-pull-model.md#use-feedrange-for-parallelization) can be used. 
It's no longer a required parameter, you can [read the Change Feed for an entire container](change-feed-pull-model.md#consume-the-changes-for-an-entire-container) easily now.| +|`ChangeFeedOptions.PartitionKey`|`FeedRange.FromPartitionKey` - A FeedRange representing the desired Partition Key can be used to [read the Change Feed for that Partition Key value](change-feed-pull-model.md#consume-the-changes-for-a-partition-key).| +|`ChangeFeedOptions.RequestContinuation`|`ChangeFeedStartFrom.Continuation` - The change feed iterator can be stopped and resumed at any time by [saving the continuation and using it when creating a new iterator](change-feed-pull-model.md#save-continuation-tokens).| |`ChangeFeedOptions.StartTime`|`ChangeFeedStartFrom.Time` | |`ChangeFeedOptions.StartFromBeginning` |`ChangeFeedStartFrom.Beginning` |-|`ChangeFeedOptions.MaxItemCount`|`ChangeFeedRequestOptions.PageSizeHint` - The change feed iterator can be stopped and resumed at any time by [saving the continuation and using it when creating a new iterator](change-feed-pull-model.md#saving-continuation-tokens).| -|`IDocumentQuery.HasMoreResults` |`response.StatusCode == HttpStatusCode.NotModified` - The change feed is conceptually infinite, so there could always be more results. When a response contains the `HttpStatusCode.NotModified` status code, it means there are no new changes to read at this time. You can use that to stop and [save the continuation](change-feed-pull-model.md#saving-continuation-tokens) or to temporarily sleep or wait and then call `ReadNextAsync` again to test for new changes. | +|`ChangeFeedOptions.MaxItemCount`|`ChangeFeedRequestOptions.PageSizeHint` - The change feed iterator can be stopped and resumed at any time by [saving the continuation and using it when creating a new iterator](change-feed-pull-model.md#save-continuation-tokens).| +|`IDocumentQuery.HasMoreResults` |`response.StatusCode == HttpStatusCode.NotModified` - The change feed is conceptually infinite, so there could always be more results. When a response contains the `HttpStatusCode.NotModified` status code, it means there are no new changes to read at this time. You can use that to stop and [save the continuation](change-feed-pull-model.md#save-continuation-tokens) or to temporarily sleep or wait and then call `ReadNextAsync` again to test for new changes. | |Split handling|It's no longer required for users to handle split exceptions when reading the change feed, splits will be handled transparently without the need of user interaction.| ### Using the bulk executor library directly from the V3 SDK |
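To make the v2-to-v3 mapping in the table concrete, here's a minimal C# sketch of the v3 pull-model pattern the table describes: start from the beginning or from a saved continuation, treat `HttpStatusCode.NotModified` as "no new changes", and persist the continuation token. This is an illustrative sketch rather than the article's own sample; the item type is a placeholder, and the mode name `ChangeFeedMode.Incremental` is called `LatestVersion` in newer SDK releases.

```csharp
using System;
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Placeholder item shape for the sketch; use your own document type.
public class ToDoItem
{
    public string id { get; set; }
    public string description { get; set; }
}

public static class ChangeFeedPullSketch
{
    // Reads one pass of the change feed and returns the continuation token to save.
    public static async Task<string> ReadOnceAsync(Container container, string savedContinuation)
    {
        // Resume from a saved continuation if one exists; otherwise start from the beginning.
        ChangeFeedStartFrom startFrom = savedContinuation is null
            ? ChangeFeedStartFrom.Beginning()
            : ChangeFeedStartFrom.ContinuationToken(savedContinuation);

        FeedIterator<ToDoItem> iterator = container.GetChangeFeedIterator<ToDoItem>(
            startFrom,
            ChangeFeedMode.Incremental, // named LatestVersion in newer SDK versions
            new ChangeFeedRequestOptions { PageSizeHint = 100 }); // replaces MaxItemCount

        while (iterator.HasMoreResults)
        {
            FeedResponse<ToDoItem> response = await iterator.ReadNextAsync();

            if (response.StatusCode == HttpStatusCode.NotModified)
            {
                // No new changes right now: stop and persist the continuation for the next run.
                return response.ContinuationToken;
            }

            foreach (ToDoItem item in response)
            {
                Console.WriteLine($"Changed item: {item.id}");
            }
        }

        return savedContinuation;
    }
}
```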
cosmos-db | Computed Properties | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/computed-properties.md | -Computed properties in Azure Cosmos DB have values derived from existing item properties but aren't persisted in items themselves. These properties are scoped to a single item and can be referenced in queries as if they were persisted properties. Computed properties make it easier to write complex query logic once and reference it many times. You can add a single index on these properties or use them as part of a composite index for increased performance. +Computed properties in Azure Cosmos DB have values that are derived from existing item properties, but the properties aren't persisted in items themselves. Computed properties are scoped to a single item and can be referenced in queries as if they were persisted properties. Computed properties make it easier to write complex query logic once and reference it many times. You can add a single index on these properties or use them as part of a composite index for increased performance. -> [!Note] -> Do you have any feedback about computed properties? We want to hear it! Feel free to share feedback directly with the Azure Cosmos DB engineering team: cosmoscomputedprops@microsoft.com +> [!NOTE] +> Do you have any feedback about computed properties? We want to hear it! Feel free to share feedback directly with the Azure Cosmos DB engineering team: [cosmoscomputedprops@microsoft.com](mailto:cosmoscomputedprops@microsoft.com). -## Computed property definition +## What is a computed property? Computed properties must be at the top level in the item and can't have a nested path. Each computed property definition has two components: a name and a query. The name is the computed property name, and the query defines logic to calculate the property value for each item. Computed properties are scoped to an individual item and therefore can't use values from multiple items or rely on other computed properties. Every container can have a maximum of 20 computed properties. Example computed property definition:+ ```json { "computedProperties": [ Example computed property definition: ### Name constraints -It's strongly recommended that computed properties are named in such a way that there's no collision with a persisted property name. To avoid overlapping property names, you can add a prefix or suffix to all computed property names. This article uses the prefix `cp_` in all name definitions. +We strongly recommend that you name computed properties so that there's no collision with a persisted property name. To avoid overlapping property names, you can add a prefix or suffix to all computed property names. This article uses the prefix `cp_` in all name definitions. > [!IMPORTANT]-> Defining a computed property with the same name as a persisted property won't give an error, but it may lead to unexpected behavior. -> Regardless of whether the computed property is indexed, values from persisted properties that share a name with a computed property won't be included in the index. -> Queries will always use the computed property instead of the persisted property, with the exception of the persisted property being returned instead of the computed property if there is a wildcard projection in the SELECT clause. -> This is because wildcard projection does not automatically include computed properties. 
+> Defining a computed property by using the same name as a persisted property doesn't result in an error, but it might lead to unexpected behavior. Regardless of whether the computed property is indexed, values from persisted properties that share a name with a computed property won't be included in the index. Queries will always use the computed property instead of the persisted property, with the exception of the persisted property being returned instead of the computed property if there is a wildcard projection in the SELECT clause. Wildcard projection does not automatically include computed properties. The constraints on computed property names are: -- All computed properties must have unique names. +- All computed properties must have unique names. -- The value of name property represents the top-level property name that can be used to reference the computed property.+- The value of the `name` property represents the top-level property name that can be used to reference the computed property. -- Reserved system property names such as `id`, `_rid`, `_ts` etc. can't be used as computed property names.+- Reserved system property names such as `id`, `_rid`, and `_ts` can't be used as computed property names. -- A computed property name can't match a property path that is already indexed. This applies to all indexing paths specified including included paths, excluded paths, spatial indexes and composite indexes.+- A computed property name can't match a property path that is already indexed. This constraint applies to all indexing paths that are specified, including included paths, excluded paths, spatial indexes, and composite indexes. ### Query constraints Queries in the computed property definition must be valid syntactically and sema The constraints on computed property query definitions are: -- Queries must specify a FROM clause representing the root item reference. Examples of supported FROM clauses are `FROM c`, `FROM root c` AND `FROM MyContainer c`.+- Queries must specify a FROM clause that represents the root item reference. Examples of supported FROM clauses are `FROM c`, `FROM root c`, and `FROM MyContainer c`. - Queries must use a VALUE clause in the projection. The constraints on computed property query definitions are: - Queries can't include a scalar subquery. -- Aggregate functions, spatial functions, non-deterministic functions and user defined functions aren't supported.+- Aggregate functions, spatial functions, nondeterministic functions, and user defined functions aren't supported. -## Creating computed properties +## Create computed properties -During the preview, computed properties must be created using the .NET v3 or Java v4 SDK. Once the computed properties have been created, you can execute queries that reference them using any method including all SDKs and Data Explorer in the Azure portal. +During the preview, computed properties must be created using the .NET v3 or Java v4 SDK. After the computed properties are created, you can execute queries that reference the properties by using any method, including all SDKs and Azure Data Explorer in the Azure portal. -|**SDK** |**Supported version** |**Notes** | +| SDK | Supported version | Notes | |--|-|-|-|.NET SDK v3 |>= [3.34.0-preview](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.34.0-preview) |Computed properties are currently only available in preview package versions. 
| -|Java SDK v4 |>= [4.46.0](https://mvnrepository.com/artifact/com.azure/azure-cosmos/4.46.0) |Computed properties are currently under preview version. | +| .NET SDK v3 | >= [3.34.0-preview](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.34.0-preview) | Computed properties are currently available only in preview package versions. | +| Java SDK v4 | >= [4.46.0](https://mvnrepository.com/artifact/com.azure/azure-cosmos/4.46.0) | Computed properties are currently under preview version. | -### Create computed properties using the SDK +### Create computed properties by using the SDK -You can either create a new container with computed properties defined, or add them to an existing container. +You can either create a new container that has computed properties defined, or you can add computed properties to an existing container. Here's an example of how to create computed properties in a new container: List<ComputedProperty> computedProperties = new ArrayList<>(List.of(new Computed containerProperties.setComputedProperties(computedProperties); client.getDatabase("myDatabase").createContainer(containerProperties); ```+ Here's an example of how to update computed properties on an existing container: Here's an example of how to update computed properties on an existing container: Query = "SELECT VALUE UPPER(c.name) FROM c" } };- // Update container with changes + // Update the container with changes await container.ReplaceContainerAsync(containerProperties); ``` CosmosContainerProperties containerProperties = container.read().getProperties() Collection<ComputedProperty> modifiedComputedProperites = containerProperties.getComputedProperties(); modifiedComputedProperites.add(new ComputedProperty("cp_upperName", "SELECT VALUE UPPER(c.firstName) FROM c")); containerProperties.setComputedProperties(modifiedComputedProperites);-// Update container with changes +// Update the container with changes container.replace(containerProperties); ```+ > [!TIP]-> Every time you update container properties, the old values are overwritten. -> If you have existing computed properties and want to add new ones, ensure you add both new and existing computed properties to the collection. +> Every time you update container properties, the old values are overwritten. If you have existing computed properties and want to add new ones, be sure that you add both new and existing computed properties to the collection. -## Using computed properties in queries +## Use computed properties in queries -Computed properties can be referenced in queries the same way as persisted properties. Values for computed properties that aren't indexed are evaluated during runtime using the computed property definition. If a computed property is indexed, the index is used in the same way as it is for persisted properties, and the computed property is evaluated on an as needed basis. It's recommended you [add indexes on your computed properties](#indexing-computed-properties) for the best cost and performance. +Computed properties can be referenced in queries the same way that persisted properties are referenced. Values for computed properties that aren't indexed are evaluated during runtime by using the computed property definition. If a computed property is indexed, the index is used the same way that it's used for persisted properties, and the computed property is evaluated on an as-needed basis. We recommend that you [add indexes on your computed properties](#index-computed-properties) for the best cost and performance. 
-These examples use the quickstart products dataset in [Data Explorer](../../data-explorer.md). Launch the quick start to get started and load the dataset in a new container. +The following examples use the quickstart products dataset that's available in [Data Explorer](../../data-explorer.md) in the Azure portal. To get started, select **Launch the quick start** and load the dataset in a new container. -Here's a sample item: +Here's an example of an item: ```json { Here's a sample item: ### Projection -If computed properties need to be projected, they must be explicitly referenced. Wildcard projections like `SELECT *` return all persisted properties, but don't include computed properties. +If computed properties need to be projected, they must be explicitly referenced. Wildcard projections like `SELECT *` return all persisted properties, but they don't include computed properties. -Let's take an example computed property definition to convert the property `name` to lowercase. +Here's an example computed property definition to convert the `name` property to lowercase: ```json { Let's take an example computed property definition to convert the property `name } ``` -This property could then be projected in a query. +This property could then be projected in a query: ```sql SELECT c.cp_lowerName FROM c SELECT c.cp_lowerName FROM c ### WHERE clause -Computed properties can be referenced in filter predicates like any persisted properties. It's recommended to add any relevant single or composite indexes when using computed properties in filters. +Computed properties can be referenced in filter predicates like any persisted properties. We recommend that you add any relevant single or composite indexes when you use computed properties in filters. -Let's take an example computed property definition to calculate a 20 percent price discount. +Here's an example computed property definition to calculate a 20 percent price discount: ```json { Let's take an example computed property definition to calculate a 20 percent pri } ``` -This property could then be filtered on to ensure that only products where the discount would be less than $50 are returned. +This property could then be filtered on to ensure that only products where the discount would be less than $50 are returned: ```sql SELECT c.price - c.cp_20PercentDiscount as discountedPrice, c.name FROM c WHERE c.cp_20PercentDiscount < 50.00 SELECT c.price - c.cp_20PercentDiscount as discountedPrice, c.name FROM c WHERE As with persisted properties, computed properties can be referenced in the GROUP BY clause and use the index whenever possible. For the best performance, add any relevant single or composite indexes. -Let's take an example computed property definition that finds the primary category for each item from the `categoryName` property. +Here's an example computed property definition that finds the primary category for each item from the `categoryName` property: ```json { Let's take an example computed property definition that finds the primary catego } ``` -You can then group by `cp_primaryCategory` to get the count of items in each primary category. 
+You can then group by `cp_primaryCategory` to get the count of items in each primary category: ```sql SELECT COUNT(1), c.cp_primaryCategory FROM c GROUP BY c.cp_primaryCategory ``` > [!TIP]-> While you could also achieve this query without using computed properties, using the computed properties greatly simplifies writing the query and allows for increased performance because `cp_primaryCategory` can be indexed. Both [SUBSTRING()](./substring.md) and [INDEX_OF()](./index-of.md) require a [full scan](../../index-overview.md#index-usage) of all items in the container, but if you index the computed property then the entire query can be served from the index instead. The ability to serve the query from the index instead of relying on a full scan increases performance and lowers query RU costs. +> Although you could also achieve this query without using computed properties, using the computed properties greatly simplifies writing the query and allows for increased performance because `cp_primaryCategory` can be indexed. Both [SUBSTRING()](./substring.md) and [INDEX_OF()](./index-of.md) require a [full scan](../../index-overview.md#index-usage) of all items in the container, but if you index the computed property, then the entire query can be served from the index instead. The ability to serve the query from the index instead of relying on a full scan increases performance and lowers query request unit (RU) costs. ### ORDER BY clause -As with persisted properties, computed properties can be referenced in the ORDER BY clause and need to be indexed for the query to succeed. Using computed properties, you can ORDER BY the result of complex logic or system functions, which opens up many new query scenarios using Azure Cosmos DB. +As with persisted properties, computed properties can be referenced in the ORDER BY clause, and they must be indexed for the query to succeed. By using computed properties, you can ORDER BY the result of complex logic or system functions, which opens up many new query scenarios when you use Azure Cosmos DB. -Let's take an example computed property definition that gets the month out of the `_ts` value. +Here's an example computed property definition that gets the month out of the `_ts` value: ```json { Let's take an example computed property definition that gets the month out of th } ``` -Before you can ORDER BY `cp_monthUpdated`, you must add it to your indexing policy. Once your indexing policy is updated, you can order by the computed property. +Before you can ORDER BY `cp_monthUpdated`, you must add it to your indexing policy. After your indexing policy is updated, you can order by the computed property. ```sql SELECT * FROM c ORDER BY c.cp_monthUpdated ``` -## Indexing computed properties +## Index computed properties -Computed properties aren't indexed by default and aren't covered by wildcard paths in the [indexing policy](../../index-policy.md). You can add single or composite indexes on computed properties in the indexing policy the same way you would add indexes on persisted properties. It's recommended to add relevant indexes to all computed properties as they're most beneficial in increasing performance and reducing RUs when they're indexed. When computed properties are indexed, actual values are evaluated during item write operations to generate and persist index terms. +Computed properties aren't indexed by default and aren't covered by wildcard paths in the [indexing policy](../../index-policy.md). 
You can add single or composite indexes on computed properties in the indexing policy the same way you would add indexes on persisted properties. We recommend that you add relevant indexes to all computed properties because they're most beneficial in increasing performance and reducing RUs when they're indexed. When computed properties are indexed, actual values are evaluated during item write operations to generate and persist index terms. -There are a few considerations for indexing computed properties including: +There are a few considerations for indexing computed properties, including: -- Computed properties can be specified in included paths, excluded paths and composite index paths.+- Computed properties can be specified in included paths, excluded paths, and composite index paths. - Computed properties can't have a spatial index defined on them. There are a few considerations for indexing computed properties including: - If you're removing a computed property that has been indexed, all indexes on that property must also be dropped. > [!NOTE]-> All computed properties are defined at the top level of the item so the path is always `/<computed property name>`. +> All computed properties are defined at the top level of the item. The path is always `/<computed property name>`. ### Add a single index for computed properties -Add a single index for a computed property named `cp_myComputedProperty`. +To add a single index for a computed property named `cp_myComputedProperty`: ```json { Add a single index for a computed property named `cp_myComputedProperty`. ### Add a composite index for computed properties -Add a composite index on two properties where one is computed, `cp_myComputedProperty`, and the other is persisted `myPersistedProperty`. +To add a composite index on two properties in which one is computed as `cp_myComputedProperty`, and the other is persisted as `myPersistedProperty`: ```json { Add a composite index on two properties where one is computed, `cp_myComputedPro } ``` -## RU consumption +## Understand request unit consumption -Adding computed properties to a container does not consume RUs. Write operations on containers that have computed properties defined may see a slight RU increase. If a computed property is indexed, RUs on write operations will increase to reflect the costs for indexing and evaluation of computed property. While in preview, RU charges related to computed properties are subject to change. +Adding computed properties to a container doesn't consume RUs. Write operations on containers that have computed properties defined might have a slight RU increase. If a computed property is indexed, RUs on write operations increase to reflect the costs for indexing and evaluation of the computed property. While in preview, RU charges that are related to computed properties are subject to change. ## Next steps -- [Getting started with queries](./getting-started.md)-- [Managing indexing policies](../how-to-manage-indexing-policy.md)+- [Get started with queries](./getting-started.md) +- [Manage indexing policies](../how-to-manage-indexing-policy.md) |
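Because the article defines computed properties with the SDK and then indexes them separately, an end-to-end sketch of both steps in one update may be useful. The following C# snippet is a sketch under the assumption that the preview `Microsoft.Azure.Cosmos` package (3.34.0-preview or later) exposes a settable `ComputedProperties` collection on `ContainerProperties`; the property name and query are the placeholders used earlier in this article.

```csharp
using System.Collections.ObjectModel;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class ComputedPropertySketch
{
    public static async Task AddComputedPropertyWithIndexAsync(Container container)
    {
        // Read the current container definition.
        ContainerProperties properties = (await container.ReadContainerAsync()).Resource;

        // Define the full set of computed properties. Remember that replacing container
        // properties overwrites the previous collection, so include existing ones too.
        properties.ComputedProperties = new Collection<ComputedProperty>
        {
            new ComputedProperty
            {
                Name = "cp_lowerName",
                Query = "SELECT VALUE LOWER(c.name) FROM c"
            }
        };

        // Index the computed property so filters and ORDER BY can be served from the index.
        properties.IndexingPolicy.IncludedPaths.Add(new IncludedPath
        {
            Path = "/cp_lowerName/?"
        });

        await container.ReplaceContainerAsync(properties);
    }
}
```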
cosmos-db | Read Change Feed | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/read-change-feed.md | Like Azure Functions, developing with the change feed processor library is easy. 

## Reading change feed with a pull model 

-The [change feed pull model](change-feed-pull-model.md) allows you to consume the change feed at your own pace. Changes must be requested by the client and there's no automatic polling for changes. If you want to permanently "bookmark" the last processed change (similar to the push model's lease container), you'll need to [save a continuation token](change-feed-pull-model.md#saving-continuation-tokens). +The [change feed pull model](change-feed-pull-model.md) allows you to consume the change feed at your own pace. Changes must be requested by the client and there's no automatic polling for changes. If you want to permanently "bookmark" the last processed change (similar to the push model's lease container), you'll need to [save a continuation token](change-feed-pull-model.md#save-continuation-tokens). 

Using the change feed pull model, you get more low-level control of the change feed. When reading the change feed with the pull model, you have three options: 

- Read changes for an entire container-- Read changes for a specific [FeedRange](change-feed-pull-model.md#using-feedrange-for-parallelization)+- Read changes for a specific [FeedRange](change-feed-pull-model.md#use-feedrange-for-parallelization) 
- Read changes for a specific partition key value 

You can parallelize the processing of changes across multiple clients, just as you can with the change feed processor. However, the pull model doesn't automatically handle load-balancing across clients. When you use the pull model to parallelize processing of the change feed, you'll first obtain a list of FeedRanges. A FeedRange spans a range of partition key values. You'll need to have an orchestrator process that obtains FeedRanges and distributes them among your machines. You can then use these FeedRanges to have multiple machines read the change feed in parallel. |
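The paragraph above describes obtaining FeedRanges and distributing them across machines; a minimal C# sketch of that pattern (with the orchestration itself left out) might look like the following. It assumes the .NET SDK v3; how the ranges are persisted and assigned to workers is up to your own orchestrator, and `ChangeFeedMode.Incremental` is named `LatestVersion` in newer SDK releases.

```csharp
using System;
using System.Collections.Generic;
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class FeedRangeSketch
{
    // Orchestrator side: list the ranges once and hand one (or more) to each worker.
    public static Task<IReadOnlyList<FeedRange>> GetRangesAsync(Container container)
    {
        return container.GetFeedRangesAsync();
    }

    // Worker side: read only the changes that fall inside the assigned range.
    public static async Task ReadRangeAsync(Container container, FeedRange assignedRange)
    {
        FeedIterator<dynamic> iterator = container.GetChangeFeedIterator<dynamic>(
            ChangeFeedStartFrom.Beginning(assignedRange),
            ChangeFeedMode.Incremental);

        while (iterator.HasMoreResults)
        {
            FeedResponse<dynamic> response = await iterator.ReadNextAsync();
            if (response.StatusCode == HttpStatusCode.NotModified)
            {
                break; // Caught up; save response.ContinuationToken before exiting.
            }

            foreach (dynamic change in response)
            {
                Console.WriteLine(change);
            }
        }
    }
}
```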
cosmos-db | Troubleshoot Changefeed Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-changefeed-functions.md | The concept of a *change* is an operation on an item. The most common scenarios * There is a load balancing of leases across instances. When instances increase or decrease, [load balancing](change-feed-processor.md#dynamic-scaling) can cause the same batch of changes to be delivered to multiple Function instances. This is expected and by design, and should be transient. The [trigger logs](how-to-configure-cosmos-db-trigger.md#enabling-trigger-specific-logs) include the events when an instance acquires and releases leases. -* The item is being updated. The change feed can contain multiple operations for the same item. If the item is receiving updates, it can pick up multiple events (one for each update). One easy way to distinguish among different operations for the same item is to track the `_lsn` [property for each change](change-feed-modes.md#parsing-the-response-object). If the properties don't match, the changes are different. +* The item is being updated. The change feed can contain multiple operations for the same item. If the item is receiving updates, it can pick up multiple events (one for each update). One easy way to distinguish among different operations for the same item is to track the `_lsn` [property for each change](change-feed-modes.md#parse-the-response-object). If the properties don't match, the changes are different. * If you're identifying items only by `id`, remember that the unique identifier for an item is the `id` and its partition key. (Two items can have the same `id` but a different partition key). |
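As a small illustration of the `_lsn` guidance above: if your function deserializes change feed items into your own type, you can key de-duplication on the combination of `id`, partition key, and `_lsn`. The class and property names below are assumptions for the sketch (the default Newtonsoft.Json-based serialization is assumed for the `_lsn` mapping), and the in-memory `HashSet` is suitable only for a single instance; it's shown purely to make the idea concrete.

```csharp
using System.Collections.Generic;
using Newtonsoft.Json;

// Illustrative item shape: the system properties id and _lsn, plus your partition key.
public class ChangedItem
{
    [JsonProperty("id")]
    public string Id { get; set; }

    [JsonProperty("partitionKey")]
    public string PartitionKey { get; set; }

    [JsonProperty("_lsn")]
    public long Lsn { get; set; }
}

public class ChangeDeduplicator
{
    // Tracks (id, partition key, _lsn) triples that have already been processed.
    private readonly HashSet<string> _seen = new HashSet<string>();

    public bool IsDuplicate(ChangedItem item)
    {
        string key = $"{item.Id}|{item.PartitionKey}|{item.Lsn}";
        return !_seen.Add(key);
    }
}
```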
cosmos-db | Restore In Account Continuous Backup Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/restore-in-account-continuous-backup-introduction.md | Title: Same account(In-account) restore for continuous backup (preview) + Title: Same-account (in-account) restore for continuous backup (preview) -description: Restore a deleted container or database to a specific point in time in the same Azure Cosmos DB account. +description: An introduction to restoring a deleted container or database to a specific point in time in the same Azure Cosmos DB account. -# Restoring deleted databases/containers in the same account with continuous backup in Azure Cosmos DB (preview) +# Restore a deleted database or container in the same account by using continuous backup (preview) [!INCLUDE[NoSQL, MongoDB, Gremlin, Table](includes/appliesto-nosql-mongodb-gremlin-table.md)] -The same account restore capability of continuous backup in Azure Cosmos DB allows you to restore the deleted databases or containers within the same existing account. You can perform this restore operation using the [Azure portal](how-to-restore-in-account-continuous-backup.md?tabs=azure-portal&pivots=api-nosql), [Azure CLI](how-to-restore-in-account-continuous-backup.md?tabs=azure-cli&pivots=api-nosql), or [Azure PowerShell](how-to-restore-in-account-continuous-backup.md?tabs=azure-powershell&pivots=api-nosql). This feature helps in recovering the data from accidental deletions of databases or containers. +The same-account restore capability of continuous backup in Azure Cosmos DB allows you to restore a deleted database or container in the same existing account. You can perform this restore operation by using the [Azure portal](how-to-restore-in-account-continuous-backup.md?tabs=azure-portal&pivots=api-nosql), the [Azure CLI](how-to-restore-in-account-continuous-backup.md?tabs=azure-cli&pivots=api-nosql), or [Azure PowerShell](how-to-restore-in-account-continuous-backup.md?tabs=azure-powershell&pivots=api-nosql). This feature helps you recover data from accidental deletions of databases or containers. ## What is restored? -You can choose to restore any combination of deleted provisioned throughput containers or shared throughput databases. The specified databases or containers are restored in all the regions present in the account when the restore operation was started. The duration of restoration depends on the amount of data that needs to be restored and the regions where the account is present. Since a parent database must be present before a container can be restored, the database restoration must be done first before restoring the child container. +You can choose to restore any combination of deleted provisioned throughput containers or shared throughput databases. The specified databases or containers are restored in all the regions that are present in the account when the restore operation is started. The duration of restoration depends on the amount of data that needs to be restored and the regions in which the account is present. Because a parent database must be present before a container can be restored, the database restoration must be done before you restore the child container. -For more information on what continuous backup does and doesn't restore, see the [continuous backup introduction](continuous-backup-restore-introduction.md). 
+For more information on what continuous backup restores and what it doesn't restore, see the [continuous backup introduction](continuous-backup-restore-introduction.md). 

> [!NOTE]-> When the deleted databases or containers are restored within the same account, those resource should be treated as new resources. Existing session or continuation tokens in use from your client applications will become invalid. It is recommended to refresh the locally stored session and continuations tokens before performing further reads or writes on the newly restored resources. Also, It is recommended to restart SDK clients to automatically refresh session and continuations tokens stored in the SDK cache. +> When the deleted databases or containers are restored within the same account, those resources should be treated as new resources. Existing session or continuation tokens in use from your client applications become invalid. We recommend that you refresh the locally stored session and continuation tokens before you perform further reads or writes on the newly restored resources. Also, we recommend that you restart SDK clients to automatically refresh session and continuation tokens that are stored in the SDK cache. 

-If your application listens to change feed events on the restored database or containers, it should restart the change feed from the beginning after a restore operation. The restored resource will only have the change feed events starting from the lifetime of the resource after restore. All change feed events before the deletion of the resource aren't propagated to the change feed. Similarly, it's also recommended to restart query operations after a restore operation. Existing query operations may have generated continuation tokens, which now become invalid after a restoration operation. +If your application listens to change feed events on the restored database or containers, it should restart the change feed from the beginning after a restore operation. The restored resource will have only the change feed events starting from the lifetime of the resource after restore. All change feed events before the deletion of the resource aren't propagated to the change feed. We also recommend that you restart query operations after a restore operation. Existing query operations might have generated continuation tokens, which become invalid after a restoration operation. 

## Permissions 

-You can restrict the restore permissions for a continuous backup account to a specific role or principal. For more information on permissions and how to assign them, see [continuous backup and restore permissions](continuous-backup-restore-permissions.md). +You can restrict the restore permissions for a continuous backup account to a specific role or principal. For more information on permissions and how to assign them, see [Continuous backup and restore permissions](continuous-backup-restore-permissions.md). 

-## Understanding container instance identifiers +## Understand container instance identifiers 

-When a deleted container gets restored within the same account, the restored container has the same name and resourceId of the original container that was previously deleted. To easily distinguish between the different versions of the container, use the `CollectionInstanceId` field. The `CollectionInstanceId` field differentiates between the different versions of a container. These versions include both the original container that was deleted and the newly restored container. 
The instance identifier is stored as part of the restore parameters in the restored container's resource definition. The original container, conversely doesn't have restore parameters defined in the resource definition of the container. Each later restored instance of a container has a unique instance identifier. +When a deleted container is restored within the same account, the restored container has the same name and `resourceId` value of the original container that was previously deleted. To easily distinguish between the different versions of the container, use the `CollectionInstanceId` field. The `CollectionInstanceId` field differentiates between the different versions of a container. These versions include both the original container that was deleted and the newly restored container. The instance identifier is stored as part of the restore parameters in the restored container's resource definition. The original container conversely doesn't have restore parameters defined in the resource definition of the container. Each later restored instance of a container has a unique instance identifier. Here's an example: -| | Instance identifier | +| Instance | Instance identifier | | | | | **Original container** | *not defined* | | **First restoration of container** | `11111111-1111-1111-1111-111111111111` | Here's an example: ## In-account restore scenarios -Azure Cosmos DB's point-in-time restore in same account feature helps you to recover from an accidental delete on a database or a container. This feature restores into any region, where backups existed, within the same account. The continuous backup mode allows you to restore to any point of time within the last 30 days or seven days depending on the configured tier. +The Azure Cosmos DB feature to restore to a point in time in the same account helps you recover from an accidental delete on a database or a container. This feature restores to any region in which previous backups existed in the same account. You can use continuous backup mode to restore to any point of time within the last 30 days or 7 days depending on the configured tier. -- Consider an example scenario where the restore operation targets an existing account. In this scenario, you can only perform the restore operation on a specified database or container if the specified resource was available in the current write region as of the restore's source database/container timestamp. The in-account restore feature doesn't allow restoring existing (or not-deleted) databases or container within the same account. To restore live resources, target the restore operation to a new account.+Consider an example scenario in which the restore operation targets an existing account. In this scenario, you can perform the restore operation on a specified database or container if the specified resource was available in the current write region as of the restore's source database or container timestamp. The in-account restore feature doesn't allow restoring existing (or not-deleted) databases or containers within the same account. To restore live resources, target the restore operation to a new account. -Let's consider two more scenarios: +Consider two more scenarios: -- **Scenario 1**: The Azure Cosmos DB account has two regions: **West US** (write region) and **East US** (read region) as of timestamp `T1`. Assume the container (`C1`) is created at timestamp `T1` and got deleted at `T2`. The container, `C1` can be restored within the retention period. 
Now, consider a situation where the write region of the account is failed over to **East US**. Now, **West US** becomes the read region. Even with this situation, `C1` can be restored within its retention period as long as `C1` was present in the **East US** region as of the restore timestamp specified.+- **Scenario 1**: The Azure Cosmos DB account has two regions: **West US** (write region) and **East US** (read region) as of time stamp `T1`. Assume the container (`C1`) is created at time stamp `T1` and was deleted at `T2`. The container, `C1` can be restored within the retention period. Now, consider a situation in which the write region of the account is failed over to **East US**. Now, **West US** becomes the read region. Even with this situation, `C1` can be restored within its retention period as long as `C1` was present in the **East US** region when the restore time stamp was specified. -- **Scenario 2**: The Azure Cosmos DB account has one region **West US** at timestamp `T3`. Assume `CT2` was created at timestamp `T4` and then deleted at `T5`. The new region **East US** got added at `T6` and then failover was performed to **East US** to make it as new write region as of `T7`. In this scenario, `CT2` can't be restored because `CT2` wasn't present in the **East US** region.+- **Scenario 2**: The Azure Cosmos DB account has one region **West US** at time stamp `T3`. Assume `CT2` was created at time stamp `T4` and then deleted at `T5`. The new region **East US** was added at `T6`, and then failover was performed to **East US** to make it as new write region as of `T7`. In this scenario, `CT2` can't be restored because `CT2` wasn't present in the **East US** region. Here's a list of the current behavior characteristics of the point-in-time in-account restore feature: - Shared-throughput containers are restored with their shared-throughput database. -- Container restore operations require the original database present. This restriction implies that the restoration of the parent database must be performed before trying to restore the deleted child container. +- Container restore operations require the original database to be present. This restriction implies that the restoration of the parent database must be performed before you try to restore a deleted child container. -- Restoration of a resource (database or container) would fail if another restore operation for the same resource is already in progress.+- Restoration of a resource (database or container) fails if another restore operation for the same resource is already in progress. - Creating a container in a shared-throughput database isn't allowed until all restoration options finish. -- Restoration of a container or database is blocked if a delete operation is already in process on either of them.+- Restoration of a container or database is blocked if a delete operation is already in process on either the container or the database. - If an account has more than three different resources, restoration operations can't be run in parallel. -- Restoration of a database or container resource succeeds when the resource is present as of restore time in the current write region of the account. -- Same account restore cannot be performed while any account level operations such as add region, remove region or failover is in progress.+- Restoration of a database or container resource succeeds when the resource is present as of restore time in the current write region of the account. 
+
+- Same-account restore can't be performed while any account-level operation, such as adding a region, removing a region, or failing over, is in progress.

## Next steps 

-- [Trigger an in-account restore operation](how-to-restore-in-account-continuous-backup.md) for a continuous backup account.-- [Resource model of in-account continuous backup mode](restore-in-account-continuous-backup-resource-model.md)-- [In-account restore introduction](restore-in-account-continuous-backup-introduction.md)+- [Initiate an in-account restore operation](how-to-restore-in-account-continuous-backup.md) for a continuous backup account. 
+- Learn about the [resource model of in-account continuous backup mode](restore-in-account-continuous-backup-resource-model.md). |
cosmos-db | Serverless Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/serverless-performance.md | Title: Learn more about Azure Cosmos DB Serverless performance -description: Learn more about Azure Cosmos DB Serverless performance. + Title: Performance for the serverless account type ++description: Learn more about performance for the Azure Cosmos DB serverless account type. Last updated 12/01/2022 -# Serverless performance +# Azure Cosmos DB serverless account performance [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)] -Azure Cosmos DB Serverless resources have distinct performance characteristics that differ from those provided by provisioned throughput resources. Specifically, serverless containers do not offer any guarantees of predictable throughput or latency. However, the maximum capacity of a serverless container is determined by the data stored within it. We'll explore how this capacity varies with storage in the following section. +Azure Cosmos DB serverless resources have performance characteristics that are different than the characteristics of provisioned throughput resources. Serverless containers don't offer any guarantees of predictable throughput or latency. The maximum capacity of a serverless container is determined by the data that stored within it. The capacity varies by storage size. -## Changes in request units +## Request unit changes -Azure Cosmos DB Serverless offers 5000 RU/s for a container. However, if your workload increases beyond 250 GB or more than five physical partitions, whichever is earlier, then the request units grow linearly with number of underlying physical partitions created in the container. Beyond 5 physical partitions, with every addition of a new physical partition, 1000 RU/s are added to the container's maximum throughput capacity. +An Azure Cosmos DB serverless account offers 5,000 request units per second (RU/s) for a container. But if your workload increases to more than 250 GB or by more than five physical partitions, whichever occurs first, the request units (RUs) grow linearly with the number of underlying physical partitions that were created in the container. With the addition of each new physical partition beyond the original five physical partitions, 1,000 RU/s are added to the container's maximum throughput capacity. -To understand request unit growth with storage, lets look at the table below. +The following table lists RU growth with increased storage size: | Maximum storage | Minimum physical partitions | RU/s per container | RU/s per physical partition -|::|::|::|::| -|<=50 GB | 1 | 5000 | 5000 | -|<=100 GB | 2 | 5000 | 2500 | -|<=150 GB | 3 | 5000 | 1666 | -|<=200 GB | 4 | 5000 | 1250 | -|<=250 GB | 5 | 5000 | 1000 | -|<=300 GB | 6 | 6000 | 1000 | -|<=350 GB | 7 | 7000 | 1000 | -|<=400 GB | 8 | 8000 | 1000 | +|::|::|::|::| +|<=50 GB | 1 | 5,000 | 5,000 | +|<=100 GB | 2 | 5,000 | 2,500 | +|<=150 GB | 3 | 5,000 | 1,666 | +|<=200 GB | 4 | 5,000 | 1,250 | +|<=250 GB | 5 | 5,000 | 1,000 | +|<=300 GB | 6 | 6,000 | 1,000 | +|<=350 GB | 7 | 7,000 | 1,000 | +|<=400 GB | 8 | 8,000 | 1,000 | |.........|...|......|......|-|<= 1 TB | 20 | 20000| 1000 | +|<= 1 TB | 20 | 20,000| 1,000 | -The request units can increase beyond 20000 RU/s for a serverless container if more than 20 partitions are created in the container. It depends on the distribution of logical partition keys in your serverless container. 
+RUs can increase beyond 20,000 RU/s for a serverless container if more than 20 partitions are created in the container. The RU/s rate depends on the distribution of logical partition keys that are in your serverless container. > [!NOTE]-> These numbers represent the maximum RU/sec capacity available to a serverless container. However, it's important to note that there are no assurances of predictable throughput or latency. If your container requires such guarantees, it's recommended to use provisioned throughput. +> The numbers that are described in this article represent the maximum RU/s capacity that's available to a serverless container. However, it's important to note that if you choose a serverless account type, you have no assurances of predictable throughput or latency. If your container requires these types of guarantees, we recommend that you choose to create a provisioned throughput account type instead of a serverless account. ## Next steps -- Learn more about [serverless](serverless.md)-- Learn more about [request units.](request-units.md)-- Trying to decide between provisioned throughput and serverless? See [choose between provisioned throughput and serverless.](throughput-serverless.md)+- Learn more about the Azure Cosmos DB [serverless](serverless.md) option. +- Learn more about [request units](request-units.md). +- Review how to [choose between provisioned throughput and serverless](throughput-serverless.md). |
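To make the request-unit growth rule in the table above concrete, here's a minimal C# sketch (not taken from the article) that approximates the maximum RU/s from the storage size, using the table's own mapping: roughly 50 GB per physical partition, a 5,000 RU/s floor, and 1,000 RU/s per partition beyond five. Real containers can have more physical partitions than this minimum, so treat the result as a lower bound on the maximum throughput rather than a guarantee.

```csharp
using System;

public static class ServerlessThroughputEstimate
{
    // Approximate the maximum RU/s for a serverless container from its storage size,
    // following the table above: ~50 GB per physical partition, a 5,000 RU/s minimum,
    // and 1,000 RU/s per physical partition once there are more than five partitions.
    public static int EstimateMaxRuPerSecond(double storageGb)
    {
        int minimumPartitions = Math.Max(1, (int)Math.Ceiling(storageGb / 50.0));
        return Math.Max(5_000, minimumPartitions * 1_000);
    }
}

// Examples:
//   EstimateMaxRuPerSecond(40)  -> 5,000 RU/s  (1 partition; the 5,000 RU/s floor applies)
//   EstimateMaxRuPerSecond(300) -> 6,000 RU/s  (matches the 6-partition row in the table)
//   EstimateMaxRuPerSecond(600) -> 12,000 RU/s (at least 12 partitions)
```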
cosmos-db | Serverless | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/serverless.md | Title: Serverless (consumption-based) + Title: Serverless (consumption-based) account type -description: Learn how to use Azure Cosmos DB in a consumption-based manner with the serverless feature and compare it to provisioned throughput. +description: Learn how to use Azure Cosmos DB based on consumption by choosing the serverless account type. Learn how the serverless model compares to the provisioned throughput model. Last updated 03/16/2023 -# Azure Cosmos DB serverless +# Azure Cosmos DB serverless account type [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)] -The Azure Cosmos DB serverless offering lets you use your Azure Cosmos DB account in a consumption-based fashion. With serverless, you're only charged for the Request Units (RUs) consumed by your database operations and the storage consumed by your data. Serverless containers can serve thousands of requests per second with no minimum charge and no capacity planning required. +For an Azure Cosmos DB pricing option that's based on only the resources that you use, choose the Azure Cosmos DB serverless account type. With the serverless option, you're charged only for the request units (RUs) that your database operations consume and for the storage that your data consumes. Serverless containers can serve thousands of requests per second with no minimum charge and no capacity planning required. > [!IMPORTANT] > Do you have any feedback about serverless? We want to hear it! Feel free to drop a message to the Azure Cosmos DB serverless team: [azurecosmosdbserverless@service.microsoft.com](mailto:azurecosmosdbserverless@service.microsoft.com). -Every database operation in Azure Cosmos DB has a cost expressed in [Request Units (RUs)](request-units.md). How you're charged for this cost depends on the type of Azure Cosmos DB account you're using: +Every database operation in Azure Cosmos DB has a cost that's expressed in [RUs](request-units.md). How you're charged for this cost depends on the type of Azure Cosmos DB account you choose: -- In [provisioned throughput](set-throughput.md) mode, you have to commit to a certain amount of throughput (expressed in Request Units per second or RU/s) that is provisioned on your databases and containers. The cost of your database operations is then deducted from the number of Request Units available every second. At the end of your billing period, you get billed for the amount of throughput you've provisioned.-- In serverless mode, you don't have to configure provisioned throughput when creating containers in your Azure Cosmos DB account. At the end of your billing period, you get billed for the number of Request Units your database operations consumed.+- **Provisioned throughput**: In the [provisioned throughput](set-throughput.md) account type, you commit to a certain amount of throughput (expressed in RUs per second or *RU/s*) that is provisioned on your databases and containers. The cost of your database operations is then deducted from the number of RUs that are available every second. For each billing period, you're billed for the amount of throughput that you provisioned. +- **Serverless**: In the serverless account type, you don't have to configure provisioned throughput when you create containers in your Azure Cosmos DB account. 
For each billing period, you're billed for the number of RUs that your database operations consumed. -## Use-cases +## Use cases -Azure Cosmos DB serverless best fits scenarios where you expect **intermittent and unpredictable traffic** with long idle times. Because provisioning capacity in such situations isn't required and may be cost-prohibitive, Azure Cosmos DB serverless should be considered in the following use-cases: +The Azure Cosmos DB serverless option best fits scenarios in which you expect *intermittent and unpredictable traffic* and long idle times. Because provisioning capacity in these types of scenarios isn't required and might be cost-prohibitive, Azure Cosmos DB serverless should be considered in the following use cases: -- Getting started with Azure Cosmos DB-- Running applications with:- - Bursty, intermittent traffic that is hard to forecast, or - - Low (<10%) average-to-peak traffic ratio -- Developing, testing, prototyping and running in production new applications where the traffic pattern is unknown-- Integrating with serverless compute services like [Azure Functions](../azure-functions/functions-overview.md)+- You're getting started with Azure Cosmos DB. +- You're running applications that have one of the following patterns: + - Bursting, intermittent traffic that is hard to forecast. + - Low (less than 10 percent) average-to-peak traffic ratio. +- You're developing, testing, prototyping, or offering your users a new application, and you don't yet know the traffic pattern. +- You're integrating with a serverless compute service, like [Azure Functions](../azure-functions/functions-overview.md). -For more information, see [choosing between provisioned throughput and serverless](throughput-serverless.md). +For more information, see [Choose between provisioned throughput and serverless](throughput-serverless.md). -## Using serverless resources +## Use serverless resources -Serverless is a new Azure Cosmos DB account type, which means that you have to choose between **provisioned throughput** and **serverless** when creating a new account. You must create a new serverless account to get started with serverless. Migrating existing accounts to/from serverless mode isn't currently supported. +Azure Cosmos DB serverless is a new account type in Azure Cosmos DB. When you create an Azure Cosmos DB account, you choose between *provisioned throughput* and *serverless* options. -Any container that is created in a serverless account is a serverless container. Serverless containers expose the same capabilities as containers created in provisioned throughput mode, so you read, write and query your data the exact same way. However serverless accounts and containers also have specific characteristics: +To get started with using the serverless model, you must create a new serverless account. Migrating an existing account to or from the serverless model currently isn't supported. -- A serverless account can only run in a single Azure region. It isn't possible to add more Azure regions to a serverless account after you create it.-- Provisioning throughput isn't required on serverless containers, so the following statements are applicable:- - You can't pass any throughput when creating a serverless container and doing so returns an error. - - You can't read or update the throughput on a serverless container and doing so returns an error. - - You can't create a shared throughput database in a serverless account and doing so returns an error. 
-- Serverless container can store a maximum of 1 TB of data and indexes.-- Serverless container offers a maximum throughput ranging from 5000 RU/s to 20,000 RU/s, depending on the number of available partitions. In the ideal scenario, a 1 TB data set would require 20,000 RU/s, but the available throughput can exceed this. For further details, please refer to the documentation on [Serverless Performance](serverless-performance.md).+Any container that's created in a serverless account is a serverless container. Serverless containers have the same capabilities as containers that are created in a provisioned throughput account type. You read, write, and query your data exactly the same way. But a serverless account and a serverless container also have other specific characteristics: -## Monitoring your consumption +- A serverless account can run only in a single Azure region. It isn't possible to add more Azure regions to a serverless account after you create the account. +- Provisioning throughput isn't required on a serverless container, so the following statements apply: + - You can't pass any throughput when you create a serverless container or an error is returned. + - You can't read or update the throughput on a serverless container or an error is returned. + - You can't create a shared throughput database in a serverless account or an error is returned. +- A serverless container can store a maximum of 1 TB of data and indexes. +- A serverless container offers a maximum throughput that ranges from 5,000 RU/s to 20,000 RU/s. The maximum throughput depends on the number of partitions that are available in the container. In the ideal scenario, a 1-TB dataset would require 20,000 RU/s, but the available throughput can exceed this amount. For more information, see [Azure Cosmos DB serverless performance](serverless-performance.md). -If you have used Azure Cosmos DB in provisioned throughput mode before, you find serverless is more cost-effective when your traffic doesn't justify provisioned capacity. The trade-off is that your costs become less predictable because you're billed based on the number of requests your database has processed. Because of the lack of predictability, it's important to keep an eye on your current consumption. +## Monitor your consumption -When browsing the **Metrics** pane of your account, you find a chart named **Request Units consumed** under the **Overview** tab. This chart shows how many Request Units your account has consumed: +If you've used the Azure Cosmos DB provisioned throughput model before, you might find that the serverless model is more cost-effective when your traffic doesn't justify provisioned capacity. The tradeoff is that your costs become less predictable because you're billed based on the number of requests that your database processes. Because of the lack of predictability when you use the serverless option, it's important to monitor your current consumption. +You can monitor consumption by viewing a chart in your Azure Cosmos DB account in the Azure portal. For your Azure Cosmos DB account, go to the **Metrics** pane. On the **Overview** tab, view the chart that's named **Request Units consumed**. The chart shows how many RUs your account has consumed for different periods of time. -You can find the same chart when using Azure Monitor, as described [here](monitor-request-unit-usage.md). 
Azure Monitor enables the ability to configure [alerts](../azure-monitor/alerts/alerts-metric-overview.md), which can be used to notify you when your Request Unit consumption has passed a certain threshold. ++You can use the same [chart in Azure Monitor](monitor-request-unit-usage.md). When you use Azure Monitor, you can set up [alerts](../azure-monitor/alerts/alerts-metric-overview.md) so that you're notified when your RU consumption passes a threshold that you set. ## Next steps -Get started with serverless with the following articles: +To get started with using the serverless pricing option in Azure Cosmos DB, review the following articles: -- [Azure Cosmos DB Serverless performance](serverless-performance.md)+- [Azure Cosmos DB serverless performance](serverless-performance.md) - [Choose between provisioned throughput and serverless](throughput-serverless.md) - [Pricing model in Azure Cosmos DB](how-pricing-works.md) |
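As a concrete illustration of the serverless account type and the RU-consumption monitoring described above, the following Azure CLI sketch creates a serverless account and then defines a metric alert on consumed RUs. This is a minimal sketch, not part of the article being changed: the resource names are placeholders, and the metric name (`TotalRequestUnits`) and the threshold are assumptions to verify against your own account.

```azurecli-interactive
# Create a serverless account by adding the EnableServerless capability (names are placeholders)
az cosmosdb create \
  --name my-serverless-account \
  --resource-group my-rg \
  --capabilities EnableServerless

# Alert when consumed RUs pass a threshold over a one-hour window (metric name and threshold are assumptions)
az monitor metrics alert create \
  --name serverless-ru-alert \
  --resource-group my-rg \
  --scopes $(az cosmosdb show --name my-serverless-account --resource-group my-rg --query id -o tsv) \
  --condition "total TotalRequestUnits > 1000000" \
  --window-size 1h \
  --evaluation-frequency 15m
```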
data-factory | Format Delta | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-delta.md | The below table lists the properties supported by a delta sink. You can edit the | Compression type | The compression type of the delta table | no | `bzip2`<br>`gzip`<br>`deflate`<br>`ZipDeflate`<br>`snappy`<br>`lz4` | compressionType | | Compression level | Choose whether the compression completes as quickly as possible or if the resulting file should be optimally compressed. | required if `compressedType` is specified. | `Optimal` or `Fastest` | compressionLevel | | Vacuum | Deletes files older than the specified duration that is no longer relevant to the current table version. When a value of 0 or less is specified, the vacuum operation isn't performed. | yes | Integer | vacuum |-| Table action | Tells ADF what to do with the target Delta table in your sink. You can leave it as-is and append new rows, overwrite the existing table definition and data with new metadata and data, or keep the existing table structure but first truncate all rows, then insert the new rows. | no | None, Truncate, Overwrite | truncate, overwrite | +| Table action | Tells ADF what to do with the target Delta table in your sink. You can leave it as-is and append new rows, overwrite the existing table definition and data with new metadata and data, or keep the existing table structure but first truncate all rows, then insert the new rows. | no | None, Truncate, Overwrite | deltaTruncate, overwrite | | Update method | When you select "Allow insert" alone or when you write to a new delta table, the target receives all incoming rows regardless of the Row policies set. If your data contains rows of other Row policies, they need to be excluded using a preceding Filter transform. <br><br> When all Update methods are selected a Merge is performed, where rows are inserted/deleted/upserted/updated as per the Row Policies set using a preceding Alter Row transform. | yes | `true` or `false` | insertable <br> deletable <br> upsertable <br> updateable | | Optimized Write | Achieve higher throughput for write operation via optimizing internal shuffle in Spark executors. As a result, you may notice fewer partitions and files that are of a larger size | no | `true` or `false` | optimizedWrite: true | | Auto Compact | After any write operation has completed, Spark will automatically execute the ```OPTIMIZE``` command to re-organize the data, resulting in more partitions if necessary, for better reading performance in the future | no | `true` or `false` | autoCompact: true | |
defender-for-cloud | Monitoring Components | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/monitoring-components.md | Learn more about [using the Azure Monitor Agent with Defender for Cloud](auto-de ||:--|:--| | Release state: | Generally available (GA) | Generally available (GA) | | Relevant Defender plan: | [Foundational Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md#defender-cspm-plan-options) for agent-based security recommendations<br>[Microsoft Defender for Servers](defender-for-servers-introduction.md)<br>[Microsoft Defender for SQL](defender-for-sql-introduction.md) | [Foundational Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md#defender-cspm-plan-options) for agent-based security recommendations<br>[Microsoft Defender for Servers](defender-for-servers-introduction.md)<br>[Microsoft Defender for SQL](defender-for-sql-introduction.md) |-| Required roles and permissions (subscription-level): | [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Security Admin](../role-based-access-control/built-in-roles.md#security-admin) | [Owner](../role-based-access-control/built-in-roles.md#owner) | +| Required roles and permissions (subscription-level): |[Owner](/azure/role-based-access-control/built-in-roles)| [Owner](../role-based-access-control/built-in-roles.md#owner) | | Supported destinations: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure virtual machines | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Arc-enabled machines | | Policy-based: | :::image type="icon" source="./media/icons/no-icon.png"::: No | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes | | Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government, Azure China 21Vianet | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet | Learn more about: - [Setting up email notifications](configure-email-notifications.md) for security alerts - Protecting workloads with [the Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads)++ |
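To complement the agent comparison above, here's a minimal sketch of installing the Azure Monitor Agent extension on an existing Azure VM with the Azure CLI. The resource names are placeholders, and the extension you need depends on the OS and Defender plan; treat this as an illustration rather than the article's own procedure.

```azurecli-interactive
# Install the Azure Monitor Agent extension on a Linux VM (names are placeholders)
az vm extension set \
  --resource-group my-rg \
  --vm-name my-vm \
  --name AzureMonitorLinuxAgent \
  --publisher Microsoft.Azure.Monitor \
  --enable-auto-upgrade true

# For a Windows VM, use the AzureMonitorWindowsAgent extension instead.
```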
defender-for-cloud | Sql Azure Vulnerability Assessment Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-enable.md | You can enable vulnerability assessment in two ways: 1. Under the Security heading, select **Defender for Cloud**. 1. Enable the express configuration of vulnerability assessment: - > [!IMPORTANT] - > Baselines and scan history are not migrated. - - **If vulnerability assessment is not configured**, select **Enable** in the notice that prompts you to enable the vulnerability assessment express configuration, and confirm the change. :::image type="content" source="media/sql-azure-vulnerability-assessment-enable/enable-express-vulnerability-assessment.png" alt-text="Screenshot of notice to enable the express vulnerability assessment configuration in the Defender for Cloud settings for a SQL server."::: You can enable vulnerability assessment in two ways: Select **Enable** to use the vulnerability assessment express configuration. - **If vulnerability assessment is already configured**, select **Enable** in the notice that prompts you to switch to express configuration, and confirm the change.+ > [!IMPORTANT] + > Baselines and scan history are not migrated. :::image type="content" source="media/sql-azure-vulnerability-assessment-enable/migrate-to-express-vulnerability-assessment.png" alt-text="Screenshot of notice to migrate from the classic to the express vulnerability assessment configuration in the Defender for Cloud settings for a SQL server."::: |
defender-for-cloud | Tutorial Enable Servers Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-servers-plan.md | Defender for Servers in Microsoft Defender for Cloud brings threat detection and Microsoft Defender for Servers includes an automatic, native integration with Microsoft Defender for Endpoint. Learn more, [Protect your endpoints with Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md). With this integration enabled, you have access to the vulnerability findings from **Microsoft threat and vulnerability management**. -Defender for Servers offers two plan options that offer different levels of protection and their own cost. You can learn more about Defender for Clouds pricing on [the pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). +Defender for Servers offers two plan options with different levels of protection and their own cost. You can learn more about Defender for Cloud's pricing on [the pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). ## Prerequisites Defender for Servers offers two plan options that offer different levels of prot ## Enable the Defender for Servers plan -You can enable the Defender for Servers plan on your Azure subscription, AWS account or GCP project to protect all of your resources within that subscription on the Environment settings page. +You can enable the Defender for Servers plan from the Environment settings page to protect all the machines in an Azure subscription, AWS account, or GCP project. -**To Enable the Defender for Servers plan**: +**To enable the Defender for Servers plan**: 1. Sign in to the [Azure portal](https://portal.azure.com). You can enable the Defender for Servers plan on your Azure subscription, AWS acc ## Select a Defender for Servers plan -When you enable the Defender for Servers plan, you're then given the option to select which plan you want to enable. There are two plans you can choose from that offer different levels of protections for your resources. +When you enable the Defender for Servers plan, you're then given the option to select which plan - Plan 1 or Plan 2 - to enable. There are two plans you can choose from that offer different levels of protections for your resources. -You can compare what's included in [each plan](plan-defender-for-servers-select-plan.md#plan-features). +[Review what's included each plan](plan-defender-for-servers-select-plan.md#plan-features). **To select a Defender for Servers plan**: You can compare what's included in [each plan](plan-defender-for-servers-select- 1. In the Defender for Cloud menu, select **Environment settings**. -1. Select the relevant Azure subscription, AWS account or GCP project. +1. Select the relevant Azure subscription, AWS account, or GCP project. 1. Select **Change plans**. |
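The portal steps above can also be expressed with the Azure CLI. The sketch below enables Defender for Servers Plan 2 at the subscription scope; the `--subplan` parameter is an assumption that depends on your CLI version, so verify it with `az security pricing create --help` before relying on it.

```azurecli-interactive
# Enable Defender for Servers on the current subscription (Plan 2 shown; use P1 for Plan 1)
az security pricing create \
  --name VirtualMachines \
  --tier Standard \
  --subplan P2

# Confirm the plan that's now in effect
az security pricing show --name VirtualMachines
```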
defender-for-iot | Cli Ot Sensor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/cli-ot-sensor.md | Supported attributes for the *cyberx* user are defined as follows: |Attribute |Description | ||| |`-h`, `--help` | Shows the help message and exits. |-|`-i <INCLUDE>`, `--include <INCLUDE>` | The path to a file that contains the devices and subnet masks you want to include, where `<INCLUDE>` is the path to the file. | -|`-x EXCLUDE`, `--exclude EXCLUDE` | The path to a file that contains the devices and subnet masks you want to exclude, where `<EXCLUDE>` is the path to the file. | +|`-i <INCLUDE>`, `--include <INCLUDE>` | The path to a file that contains the devices and subnet masks you want to include, where `<INCLUDE>` is the path to the file. For example, see [Sample include or exclude file](#txt). | +|`-x EXCLUDE`, `--exclude EXCLUDE` | The path to a file that contains the devices and subnet masks you want to exclude, where `<EXCLUDE>` is the path to the file. For example, see [Sample include or exclude file](#txt). | |- `-etp <EXCLUDE_TCP_PORT>`, `--exclude-tcp-port <EXCLUDE_TCP_PORT>` | Excludes TCP traffic on any specified ports, where the `<EXCLUDE_TCP_PORT>` defines the port or ports you want to exclude. Delimitate multiple ports by commas, with no spaces. | |`-eup <EXCLUDE_UDP_PORT>`, `--exclude-udp-port <EXCLUDE_UDP_PORT>` | Excludes UDP traffic on any specified ports, where the `<EXCLUDE_UDP_PORT>` defines the port or ports you want to exclude. Delimitate multiple ports by commas, with no spaces. | |`-itp <INCLUDE_TCP_PORT>`, `--include-tcp-port <INCLUDE_TCP_PORT>` | Includes TCP traffic on any specified ports, where the `<INCLUDE_TCP_PORT>` defines the port or ports you want to include. Delimitate multiple ports by commas, with no spaces. | Supported attributes for the *cyberx* user are defined as follows: |`-p <PROGRAM>`, `--program <PROGRAM>` | Defines the component for which you want to configure a capture filter. Use `all` for basic use cases, to create a single capture filter for all components. <br><br>For advanced use cases, create separate capture filters for each component. For more information, see [Create an advanced filter for specific components](#create-an-advanced-filter-for-specific-components).| |`-m <MODE>`, `--mode <MODE>` | Defines an include list mode, and is relevant only when an include list is used. Use one of the following values: <br><br>- `internal`: Includes all communication between the specified source and destination <br>- `all-connected`: Includes all communication between either of the specified endpoints and external endpoints. <br><br>For example, for endpoints A and B, if you use the `internal` mode, included traffic will only include communications between endpoints **A** and **B**. <br>However, if you use the `all-connected` mode, included traffic will include all communications between A *or* B and other, external endpoints. | +<a name="txt"></a>**Sample include or exclude file** ++For example, an include or exclude **.txt** file might include the following entries: ++```txt +192.168.50.10 +172.20.248.1 +``` + #### Create a basic capture filter using the support user If you're creating a basic capture filter as the *support* user, no attributes are passed in the [original command](#create-a-basic-filter-for-all-components). Instead, a series of prompts is displayed to help you create the capture filter interactively. |
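As a hedged illustration of the include/exclude attributes in the table above, the following sketch creates an include file like the sample shown and combines it with a port exclusion and a mode. The capture-filter command itself isn't named here; substitute the command referenced in the linked section, and treat the file name, ports, and addresses as placeholders.

```azurecli-interactive
# Build an include file with the devices and subnets to keep (placeholder addresses)
cat <<'EOF' > include-devices.txt
192.168.50.10
172.20.248.1
EOF

# Example attribute combination for the capture-filter command described above
# (replace <capture-filter-command> with the actual command from the linked section):
# <capture-filter-command> -i include-devices.txt -etp 502,443 -m internal
```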
lab-services | How To Enable Nested Virtualization Template Vm Using Script | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-enable-nested-virtualization-template-vm-using-script.md | Perform the following steps to verify your nested VM configuration: Learn more about the [supported guest operating systems in Hyper-V](/virtualization/hyper-v-on-windows/about/supported-guest-os). -- There's a known issue with caching of the windowing library in older Linux distributions.-- Create another desktop session, manually clear the cache for the windowing library, and then restart. - ### Hyper-V doesn't start with error `The virtual machine is using processor-specific xsave features not supported` - This error can happen when a lab user leaves the Hyper-V VM in the saved state. You can right-select the VM in Hyper-V Manager and select **Delete saved state**. |
load-balancer | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/whats-new.md | You can also find the latest Azure Load Balancer updates and subscribe to the RS | Type |Name |Description |Date added | | ||||+| Feature | [Inbound ICMPv6 pings and traceroute are now supported on Azure Load Balancer (General Availability)](https://azure.microsoft.com/updates/general-availability-inbound-icmpv6-pings-and-traceroute-are-now-supported-on-azure-load-balancer/) | Azure Load Balancer now supports ICMPv6 pings to its frontend and inbound traceroute support to both IPv4 and IPv6 frontends. Learn more about [how to test reachability of your load balancer](load-balancer-test-frontend-reachability.md). | June 2023 | | Feature | [Inbound ICMPv4 pings are now supported on Azure Load Balancer (General Availability)](https://azure.microsoft.com/updates/general-availability-inbound-icmpv4-pings-are-now-supported-on-azure-load-balancer/) | Azure Load Balancer now supports ICMPv4 pings to its frontend, enabling the ability to test reachability of your load balancer. Learn more about [how to test reachability of your load balancer](load-balancer-test-frontend-reachability.md). | May 2023 | | SKU | [Basic Load Balancer is retiring on September 30, 2025](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/) | Basic Load Balancer will retire on 30 September 2025. Make sure to [migrate to Standard SKU](load-balancer-basic-upgrade-guidance.md) before this date. | September 2022 | | SKU | [Gateway Load Balancer now generally available](https://azure.microsoft.com/updates/generally-available-azure-gateway-load-balancer/) | Gateway Load Balancer is a new SKU of Azure Load Balancer targeted for scenarios requiring transparent NVA (network virtual appliance) insertion. Learn more about [Gateway Load Balancer](gateway-overview.md) or our supported [third party partners](gateway-partners.md). | July 2022 | |
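To try the new inbound ICMP support noted above, a quick reachability check from any client can look like the following sketch. The resource names and addresses are placeholders, and `ping -6` and `traceroute` availability depends on your client OS.

```azurecli-interactive
# Look up the load balancer's public frontend IP address (names are placeholders)
az network public-ip show --resource-group my-rg --name my-lb-frontend-ip --query ipAddress -o tsv

# Test reachability of the IPv4 frontend, then trace the route to it
ping <frontend-ipv4-address>
traceroute <frontend-ipv4-address>

# Test an IPv6 frontend
ping -6 <frontend-ipv6-address>
```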
machine-learning | How To Troubleshoot Online Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md | This is a list of reasons you might run into this error when using either manage * [Subscription does not exist](#subscription-does-not-exist) * [Startup task failed due to authorization error](#authorization-error) * [Startup task failed due to incorrect role assignments on resource](#authorization-error)+* [Invalid template function specification](#invalid-template-function-specification) * [Unable to download user container image](#unable-to-download-user-container-image) * [Unable to download user model](#unable-to-download-user-model) To do these, Azure uses [managed identities](../active-directory/managed-identit For more information, please see [Container Registry Authorization Error](#container-registry-authorization-error). +#### Invalid template function specification ++This error occurs when a template function is specified incorrectly. To unblock, either fix the policy or remove the policy assignment. The error message may include the policy assignment name and the policy definition to help you debug, and it may link to the [Azure Policy definition structure article](https://aka.ms/policy-avoiding-template-failures), which discusses tips for avoiding template failures. + #### Unable to download user container image It's possible that the user container couldn't be found. Check [container logs](#get-container-logs) to get more details. |
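When you hit errors like the container-image or model download failures listed above, pulling the deployment's container logs is usually the first step. A minimal sketch with the Azure ML CLI v2 extension follows; the resource, workspace, endpoint, and deployment names are placeholders, so verify the options with `az ml online-deployment get-logs --help`.

```azurecli-interactive
# Fetch recent logs from the scoring container of an online deployment (names are placeholders)
az ml online-deployment get-logs \
  --resource-group my-rg \
  --workspace-name my-workspace \
  --endpoint-name my-endpoint \
  --name blue \
  --lines 100
```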
mysql | Concepts Limitations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-limitations.md | The following are unsupported: - [REPLICATION_APPLIER](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_replication-applier) - [ROLE_ADMIN](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_role-admin) - [SESSION_VARIABLES_ADMIN](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_session-variables-admin)- - [SET_USER_ID](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_set-user-id) + - [SHOW ROUTINE](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_show-routine) - [XA_RECOVER_ADMIN](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_xa-recover-admin) ## Functional limitations |
mysql | Concepts Service Tiers Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-service-tiers-storage.md | The detailed specifications of the available server types are as follows: |Standard_E32ads_v5 | 32 | 256 | 38000 | 43691 | 1200 | |Standard_E48ds_v4 | 48 | 384 | 48000 | 65536 | 1800 | |Standard_E48ads_v5 | 48 | 384 | 48000 | 65536 | 1800 |-|Standard_E64ds_v4 | 64 | 504 | 48000 | 86016 | 2400 | -|Standard_E64ads_v5 | 64 | 504 | 48000 | 86016 | 2400 | -|Standard_E80ids_v4 | 80 | 504 | 48000 | 86016 | 2400 | +|Standard_E64ds_v4 | 64 | 504 | 64000 | 86016 | 2400 | +|Standard_E64ads_v5 | 64 | 504 | 64000 | 86016 | 2400 | +|Standard_E80ids_v4 | 80 | 504 | 72000 | 86016 | 2400 | |Standard_E2ds_v5 | 2 | 16 | 5000 | 2731 | 75 | |Standard_E4ds_v5 | 4 | 32 | 10000 | 5461 | 150 | |Standard_E8ds_v5 | 8 | 64 | 18000 | 10923 | 300 | Azure Database for MySQL – Flexible Server supports the provisioning of additional IOPS. The minimum IOPS are 360 across all compute sizes and the maximum IOPS is determined by the selected compute size. To learn more about the maximum IOPS per compute size, refer to the [table](#service-tiers-size-and-server-types). -The maximum IOPS are dependent on the maximum available IOPS per compute size. Refer to the column *Max uncached disk throughput: IOPS/MBps* in the [B-series](../../virtual-machines/sizes-b-series-burstable.md), [Ddsv4-series](../../virtual-machines/ddv4-ddsv4-series.md), and [Edsv4-series](../../virtual-machines/edv4-edsv4-series.md)/ [Edsv5-series](../../virtual-machines/edv5-edsv5-series.md)] documentation. - > [!Important] > **Complimentary IOPS** are equal to MINIMUM("Max uncached disk throughput: IOPS/MBps" of compute size, 300 + storage provisioned in GiB * 3)<br> > **Minimum IOPS are 360 across all compute sizes<br> |
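Here's a worked example of the complimentary IOPS formula above, using the updated Standard_E64ds_v4 row (maximum 64,000 IOPS) and an assumed 1,000 GiB of provisioned storage, followed by a hedged CLI sketch for provisioning additional IOPS. The server and resource group names are placeholders; verify the parameters for your CLI version.

```azurecli-interactive
# Worked example: complimentary IOPS = MIN(64000, 300 + 1000 * 3) = 3,300 for this configuration.
# Provision more than the complimentary amount with --iops (sketch; values are assumptions).
az mysql flexible-server create \
  --resource-group my-rg \
  --name my-flexible-server \
  --tier MemoryOptimized \
  --sku-name Standard_E64ds_v4 \
  --storage-size 1000 \
  --iops 5000
```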
sap | High Availability Guide Rhel Glusterfs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-glusterfs.md | +Be aware that, as documented in [Red Hat Gluster Storage Life Cycle](https://access.redhat.com/support/policy/updates/rhs), Red Hat Gluster Storage will reach end of life at the end of 2024. The configuration will be supported for SAP on Azure until it reaches the end-of-life stage. GlusterFS should not be used for new deployments. We recommend deploying the SAP shared directories on NFS on Azure Files or Azure NetApp Files volumes as documented in [HA for SAP NW on RHEL with NFS on Azure Files](./high-availability-guide-rhel-nfs-azure-files.md) or [HA for SAP NW on RHEL with Azure NetApp Files](./high-availability-guide-rhel-netapp-files.md). + Read the following SAP Notes and papers first * SAP Note [1928533], which has: Read the following SAP Notes and papers first ## Overview -To achieve high availability, SAP NetWeaver requires shared storage. GlusterFS is configured in a separate cluster and can be used by multiple SAP systems. Be aware that Red Hat is phasing out Red Hat Gluster Storage. The configuration will be supported for SAP on Azure until it reaches end of life stage as defined in [Red Hat Gluster Storage Life Cycle](https://access.redhat.com/support/policy/updates/rhs). +To achieve high availability, SAP NetWeaver requires shared storage. GlusterFS is configured in a separate cluster and can be used by multiple SAP systems.  |
sap | Integration Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/integration-get-started.md | -[According to SAP over 87% of total global commerce is generated by SAP customers](https://www.sap.com/documents/2017/04/4666ecdd-b67c-0010-82c7-eda71af511fa.html) and more SAP systems are running in the cloud each year. The SAP platform provides a foundation for innovation for many companies and can handle various workloads natively. Explore our integration section further to learn how you can combine the Microsoft Azure ecosystem with your SAP workload to accelerate your business outcomes. Among the scenarios are extensions with Power Platform ("keep the ABAP core clean"), secured APIs with Azure API Management, automated business processes with Logic Apps, enriched experiences with SAP Business Technology Platform, uniform data blending dashboards with the Azure Data Platform and more. +[According to SAP over 87% of total global commerce is generated by SAP customers](https://www.sap.com/documents/2017/04/4666ecdd-b67c-0010-82c7-eda71af511fa.html) and more SAP systems are running in the cloud each year. The SAP platform provides a foundation for innovation for many companies and can handle various workloads natively. Explore our integration section further to learn how you can combine the Microsoft Azure ecosystem with your SAP workload to accelerate your business outcomes. Among the scenarios are extensions with Power Platform ("keep the ABAP core clean"), secured APIs with Azure API Management, automated business processes with Logic Apps, enriched experiences with SAP Business Technology Platform, native Microsoft integrations using ABAP Cloud, uniform data blending dashboards with the Azure Data Platform and more. For the latest news from the SAP and Azure world, follow the [SAP on Microsoft TechCommunity section](https://techcommunity.microsoft.com/t5/sap-on-microsoft/ct-p/SAPonMicrosoft) and the relevant Azure tags on the [SAP Community](https://community.sap.com/search/?ct=blog&q=Azure). Select an area for resources about how to integrate SAP and Azure in that space. | [SAP Fiori](#sap-fiori) | Increase performance and security of your SAP Fiori applications by integrating them with Azure services. | | [Azure Active Directory (Azure AD)](#azure-ad) | Ensure end-to-end SAP user authentication and authorization with Azure Active Directory. Single sign-on (SSO) and multi-factor authentication (MFA) are the foundation for a secure and seamless user experience. | | [Azure Integration Services](#azure-integration-services) | Connect your SAP workloads with your end users, business partners, and their systems with world-class integration services. Learn about co-development efforts that enable SAP Event Mesh to exchange cloud events with Azure Event Grid, understand how you can achieve high-availability for services like SAP Cloud Integration, automate your SAP invoice processing with Logic Apps and Azure Cognitive Services and more. |-| [App Development and DevOps](#app-development-and-devops) | Apply best-in-class developer tooling to your SAP app developments and DevOps processes. | +| [App Development in any language including ABAP and DevOps](#app-development-in-any-language-including-abap-and-devops) | Apply best-in-class developer tooling to your SAP app developments and DevOps processes. 
| | [Azure Data Services](#azure-data-services) | Learn how to integrate your SAP data with Data Services like Azure Synapse Analytics, Azure Data Lake Storage, Azure Data Factory, Power BI, Data Warehouse Cloud, Analytics Cloud, which connector to choose, tune performance, efficiently troubleshoot, and more. | | [Threat Monitoring with Microsoft Sentinel for SAP](#microsoft-sentinel) | Learn how to best secure your SAP workload with Microsoft Sentinel, prevent incidents from happening and detect and respond to threats in real-time with this [SAP certified](https://www.sap.com/dmc/exp/2013_09_adpd/enEN/#/solutions?id=s:33db1376-91ae-4f36-a435-aafa892a88d8) solution. | | [SAP Business Technology Platform (BTP)](#sap-btp) | Discover integration scenarios like SAP Private Link to securely and efficiently connect your BTP apps to your Azure workloads. | Also see the following SAP resources: - [Achieve high availability for SAP Cloud Integration (part of SAP Integration Suite) on Azure](https://blogs.sap.com/2021/09/23/black-friday-will-take-your-cpi-instance-offline-unless/) - [Automate SAP invoice processing using Azure Logic Apps and Cognitive Services](https://blogs.sap.com/2021/02/03/your-sap-on-azure-part-26-automate-invoice-processing-using-azure-logic-apps-and-cognitive-services/) -### App development and DevOps +### App development in any language including ABAP and DevOps For more information about integrating SAP with Microsoft services natively, see the following resources: For more information about integrating SAP with Microsoft services natively, see - [Use community-driven OData SDKs with Azure Functions](https://github.com/Azure/azure-sdk-for-sap-odata) Also see the following SAP resources: +- [SAP BTP ABAP Environment (aka. Steampunk) integration with Microsoft services](https://blogs.sap.com/2023/06/06/kick-start-your-sap-abap-platform-integration-journey-with-microsoft/) +- [SAP S/4HANA Cloud, private edition ΓÇô ABAP Environment (aka. Embedded Steampunk) integration with Microsoft services](https://blogs.sap.com/2023/06/06/kick-start-your-sap-abap-platform-integration-journey-with-microsoft/) - [dotNET speaks OData too, how to implement Azure App Service with SAP Gateway](https://blogs.sap.com/2021/08/12/.net-speaks-odata-too-how-to-implement-azure-app-service-with-sap-odata-gateway/) - [Apply cloud native deployment practice blue-green to SAP BTP apps with Azure DevOps](https://blogs.sap.com/2019/12/20/go-blue-green-for-your-cloud-foundry-app-from-webide-with-azure-devops/) Also see the following SAP resources: - [How to use Microsoft Sentinel's SOAR capabilities with SAP](https://blogs.sap.com/2023/05/22/from-zero-to-hero-security-coverage-with-microsoft-sentinel-for-your-critical-sap-security-signals-blog-series/) - [Deploy SAP user blocking based on suspicious activity on the SAP backend](https://blogs.sap.com/2023/05/22/from-zero-to-hero-security-coverage-with-microsoft-sentinel-for-your-critical-sap-security-signals-youre-gonna-hear-me-soar-part-1/)+- [Automatically trigger re-activation of the SAP audit log on malicious deactivation](https://blogs.sap.com/2023/05/23/from-zero-to-hero-security-coverage-with-microsoft-sentinel-for-your-critical-sap-security-signals-part-3/) ++See below video to experience the SAP security orchestration, automation and response workflow with Sentinel in action: ++> [!VIDEO https://www.youtube.com/embed/b-AZnR-nQpg] ### SAP BTP |
site-recovery | Azure To Azure How To Enable Zone To Zone Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md | This article describes how to replicate, failover, and failback Azure virtual ma > [!NOTE] >-> - Support for Zone to Zone disaster recovery is currently limited to the following regions: Southeast Asia, East Asia, Japan East, Korea Central, Australia East, India Central, China North 3, UK South, West Europe, North Europe, Germany West Central, Norway East, France Central, Switzerland North, Sweden Central (Managed Access), South Africa North, Canada Central, US Gov Virginia, Central US, South Central US, East US, East US 2, West US 2, Brazil South, West US 3 and UAE North. +> - Support for Zone to Zone disaster recovery is currently limited to the following regions: Southeast Asia, East Asia, Japan East, Korea Central, Australia East, India Central, China North 3, Qatar Central, UK South, West Europe, North Europe, Germany West Central, Norway East, France Central, Switzerland North, Sweden Central (Managed Access), South Africa North, Canada Central, US Gov Virginia, Central US, South Central US, East US, East US 2, West US 2, Brazil South, West US 3 and UAE North. > >-> - Site Recovery does not move or store customer data out of the region in which it is deployed when the customer is using Zone to Zone Disaster Recovery. Customers may select a Recovery Services Vault from a different region if they so choose. The Recovery Services Vault contains metadata but no actual customer data. +> - Site Recovery does not move or store customer data out of the region in which it's deployed when the customer is using Zone to Zone Disaster Recovery. Customers may select a Recovery Services Vault from a different region if they so choose. The Recovery Services Vault contains metadata but no actual customer data. > > > - Zone to Zone disaster recovery is not supported for VMs having ZRS managed disks. If you want to move VMs to an availability zone in a different region, [review t ## Using Availability Zones for Disaster Recovery -Typically, Availability Zones are used to deploy VMs in a High Availability configuration. They may be too close to each other to serve as a Disaster Recovery solution in case of natural disaster. +Typically, Availability Zones are used to deploy VMs in a High Availability configuration. They may be too close to each other to serve as a Disaster Recovery solution in natural disaster. However, in some scenarios, Availability Zones can be leveraged for Disaster Recovery: -- Many customers who had a metro Disaster Recovery strategy while hosting applications on-premises sometimes look to mimic this strategy once they migrate applications over to Azure. These customers acknowledge the fact that metro Disaster Recovery strategy may not work in case of a large-scale physical disaster and accept this risk. For such customers, Zone to Zone Disaster Recovery can be used as a Disaster Recovery option.-- Many other customers have complicated networking infrastructure and do not wish to recreate it in a secondary region due to the associated cost and complexity. Zone to Zone Disaster Recovery reduces complexity as it leverages redundant networking concepts across Availability Zones making configuration much simpler. 
Such customers prefer simplicity and can also use Availability Zones for Disaster Recovery.-- In some regions that do not have a paired region within the same legal jurisdiction (for example, Southeast Asia), Zone to Zone Disaster Recovery can serve as the de-facto Disaster Recovery solution as it helps ensure legal compliance, since your applications and data do not move across national/regional boundaries. +- Many customers who had a metro Disaster Recovery strategy while hosting applications on-premises sometimes look to mimic this strategy once they migrate applications over to Azure. These customers acknowledge the fact that metro Disaster Recovery strategy may not work in a large-scale physical disaster and accept this risk. For such customers, Zone to Zone Disaster Recovery can be used as a Disaster Recovery option. +- Many other customers have complicated networking infrastructure and don't wish to recreate it in a secondary region due to the associated cost and complexity. Zone to Zone Disaster Recovery reduces complexity as it leverages redundant networking concepts across Availability Zones making configuration simpler. Such customers prefer simplicity and can also use Availability Zones for Disaster Recovery. +- In some regions that don't have a paired region within the same legal jurisdiction (for example, Southeast Asia), Zone to Zone Disaster Recovery can serve as the de-facto Disaster Recovery solution as it helps ensure legal compliance, since your applications and data don't move across national boundaries. + - Zone to Zone Disaster Recovery implies replication of data across shorter distances when compared with Azure to Azure Disaster Recovery and therefore, you may see lower latency and consequently lower RPO. -While these are strong advantages, there is a possibility that Zone to Zone Disaster Recovery may fall short of resilience requirements in the event of a region-wide natural disaster. +While these are strong advantages, there's a possibility that Zone to Zone Disaster Recovery may fall short of resilience requirements in the event of a region-wide natural disaster. ## Networking for Zone to Zone Disaster Recovery -As mentioned above, Zone to Zone Disaster Recovery reduces complexity as it leverages redundant networking concepts across Availability Zones making configuration much simpler. The behavior of networking components in the Zone to Zone Disaster Recovery scenario is outlined below: +As mentioned before, Zone to Zone Disaster Recovery reduces complexity as it leverages redundant networking concepts across Availability Zones making configuration simpler. The behavior of networking components in the Zone to Zone Disaster Recovery scenario is outlined below: - Virtual Network: You may use the same virtual network as the source network for actual failovers. Use a different virtual network to the source virtual network for test failovers. - Subnet: Failover into the same subnet is supported.-- Private IP address: If you are using static IP addresses, you can use the same IPs in the target zone if you choose to configure them in such a manner.+- Private IP address: If you're using static IP addresses, you can use the same IPs in the target zone if you choose to configure them in such a manner. - Accelerated Networking: Similar to Azure to Azure Disaster Recovery, you may enable Accelerated Networking if the VM SKU supports it.-- Public IP address: You can attach a previously created standard public IP address in the same region to the target VM. 
Basic public IP addresses do not support Availability Zone related scenarios.-- Load balancer: Standard load balancer is a regional resource and therefore the target VM can be attached to the backend pool of the same load balancer. A new load balancer is not required.+- Public IP address: You can attach a previously created standard public IP address in the same region to the target VM. Basic public IP addresses don't support Availability Zone related scenarios. +- Load balancer: Standard load balancer is a regional resource and therefore the target VM can be attached to the backend pool of the same load balancer. A new load balancer isn't required. - Network Security Group: You may use the same network security groups as applied to the source VM. ## Pre-requisites Log in to the Azure portal. Pricing for Zone to Zone Disaster Recovery is identical to the pricing of Azure to Azure Disaster Recovery. You can find more details on the pricing page [here](https://azure.microsoft.com/pricing/details/site-recovery/) and [here](https://azure.microsoft.com/blog/know-exactly-how-much-it-will-cost-for-enabling-dr-to-your-azure-vm/). Note that the egress charges that you would see in zone to zone disaster recovery would be lower than region to region disaster recovery. For data transfer charges between Availability Zones, check [here](https://azure.microsoft.com/pricing/details/bandwidth/). **2. What is the SLA for RTO and RPO?**-The RTO SLA is the same as that for Site Recovery overall. We promise RTO of up to 2 hours. There is no defined SLA for RPO. +The RTO SLA is the same as that for Site Recovery overall. We promise RTO of up to 2 hours. There's no defined SLA for RPO. **3. Is capacity guaranteed in the secondary zone?**-The Site Recovery team and Azure capacity management team plan for sufficient infrastructure capacity. When you start a failover, the teams also help ensure VM instances that are protected by Site Recovery will deploy to the target zone. Check [here](./azure-to-azure-common-questions.md#capacity) for more FAQs on Capacity. +The Site Recovery team and Azure capacity management team plan for sufficient infrastructure capacity. When you start a failover, the teams also help ensure VM instances that are protected by Site Recovery deploys to the target zone. Check [here](./azure-to-azure-common-questions.md#capacity) for more FAQs on Capacity. **4. Which operating systems are supported?** Zone to Zone Disaster Recovery supports the same operating systems as Azure to Azure Disaster Recovery. Refer to the support matrix [here](./azure-to-azure-support-matrix.md). No, you must fail over to a different resource group. The steps that need to be followed to run a Disaster Recovery drill, fail over, re-protect, and failback are the same as the steps in Azure to Azure Disaster Recovery scenario. -To perform a Disaster Recovery drill, please follow the steps outlined [here](./azure-to-azure-tutorial-dr-drill.md). +To perform a Disaster Recovery drill, follow the steps outlined [here](./azure-to-azure-tutorial-dr-drill.md). To perform a failover and reprotect VMs in the secondary zone, follow the steps outlined [here](./azure-to-azure-tutorial-failover-failback.md). |
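Related to the networking notes above (Basic public IPs don't support availability zone scenarios), here's a minimal sketch of creating a Standard SKU public IP in the same region that you could attach to the target VM after failover. The names and region are placeholders, and this isn't part of the Site Recovery configuration itself.

```azurecli-interactive
# Create a Standard SKU public IP to attach to the failed-over VM (names and region are placeholders)
az network public-ip create \
  --resource-group my-rg \
  --name target-vm-pip \
  --sku Standard \
  --location southeastasia
```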
site-recovery | Azure To Azure Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md | Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 16.04 LTS | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | No new 16.04 LTS kernels supported in this release. | 16.04 LTS | [9.50](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new 16.04 LTS kernels supported in this release. | |||-18.04 LTS |[9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 4.15.0-208-generic <br> 4.15.0-209-generic <br> 5.4.0-1105-azure <br> 5.4.0-1106-azure <br> 5.4.0-146-generic <br> 4.15.0-1163-azure <br> 4.15.0-1164-azure <br> 4.15.0-1165-azure <br> 4.15.0-210-generic <br> 4.15.0-211-generic <br> 5.4.0-1107-azure <br> 5.4.0-147-generic <br> 5.4.0-147-generic <br> 5.4.0-148-generic | +18.04 LTS |[9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 4.15.0-208-generic <br> 4.15.0-209-generic <br> 5.4.0-1105-azure <br> 5.4.0-1106-azure <br> 5.4.0-146-generic <br> 4.15.0-1163-azure <br> 4.15.0-1164-azure <br> 4.15.0-1165-azure <br> 4.15.0-210-generic <br> 4.15.0-211-generic <br> 5.4.0-1107-azure <br> 5.4.0-147-generic <br> 5.4.0-147-generic <br> 5.4.0-148-generic <br> 4.15.0-212-generic <br> 4.15.0-1166-azure <br> 5.4.0-149-generic <br> 5.4.0-150-generic <br> 5.4.0-1108-azure <br> 5.4.0-1109-azure | 18.04 LTS |[9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)| 5.4.0-137-generic <br> 5.4.0-1101-azure <br> 4.15.0-1161-azure <br> 4.15.0-204-generic <br> 5.4.0-1103-azure <br> 5.4.0-139-generic <br> 4.15.0-206-generic <br> 5.4.0-1104-azure <br> 5.4.0-144-generic <br> 4.15.0-1162-azure | 18.04 LTS |[9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0)| 4.15.0-196-generic <br> 4.15.0-1157-azure <br> 5.4.0-1098-azure <br> 4.15.0-1158-azure <br> 4.15.0-1159-azure <br> 4.15.0-201-generic <br> 4.15.0-202-generic <br> 5.4.0-1100-azure <br> 5.4.0-136-generic | 18.04 LTS | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) |4.15.0-1151-azure </br> 4.15.0-193-generic </br> 5.4.0-1091-azure </br> 5.4.0-126-generic</br>4.15.0-1153-azure </br>4.15.0-194-generic </br>5.4.0-1094-azure </br>5.4.0-128-generic </br>5.4.0-131-generic | 18.04 LTS |[9.50](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 4.15.0-1149-azure </br> 4.15.0-1150-azure </br> 4.15.0-191-generic </br> 4.15.0-192-generic </br>5.4.0-1089-azure </br>5.4.0-1090-azure </br>5.4.0-124-generic| |||-20.04 LTS |[9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 5.15.0-1035-azure <br> 5.15.0-1036-azure <br> 5.15.0-69-generic <br> 5.4.0-1105-azure <br> 5.4.0-1106-azure <br> 5.4.0-146-generic <br> 5.4.0-147-generic <br> 5.15.0-1037-azure <br> 5.15.0-1038-azure <br> 5.15.0-70-generic <br> 5.15.0-71-generic <br> 5.15.0-72-generic <br> 5.4.0-1107-azure <br> 5.4.0-148-generic | +20.04 LTS 
|[9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 5.15.0-1035-azure <br> 5.15.0-1036-azure <br> 5.15.0-69-generic <br> 5.4.0-1105-azure <br> 5.4.0-1106-azure <br> 5.4.0-146-generic <br> 5.4.0-147-generic <br> 5.15.0-1037-azure <br> 5.15.0-1038-azure <br> 5.15.0-70-generic <br> 5.15.0-71-generic <br> 5.15.0-72-generic <br> 5.4.0-1107-azure <br> 5.4.0-148-generic <br> 5.4.0-149-generic <br> 5.4.0-150-generic <br> 5.4.0-1108-azure <br> 5.4.0-1109-azure <br> 5.15.0-73-generic <br> 5.15.0-1039-azure | 20.04 LTS | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | 5.4.0-1101-azure <br> 5.15.0-1033-azure <br> 5.15.0-60-generic <br> 5.4.0-1103-azure <br> 5.4.0-139-generic <br> 5.15.0-1034-azure <br> 5.15.0-67-generic <br> 5.4.0-1104-azure <br> 5.4.0-144-generic | 20.04 LTS | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | 5.4.0-1095-azure <br> 5.15.0-1023-azure <br> 5.4.0-1098-azure <br> 5.15.0-1029-azure <br> 5.15.0-1030-azure <br> 5.15.0-1031-azure <br> 5.15.0-57-generic <br> 5.15.0-58-generic <br> 5.4.0-1100-azure <br> 5.4.0-136-generic <br> 5.4.0-137-generic | 20.04 LTS | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) |5.13.0-1009-azure </br> 5.13.0-1012-azure </br> 5.13.0-1013-azure </br> 5.13.0-1014-azure </br> 5.13.0-1017-azure </br> 5.13.0-1021-azure </br> 5.13.0-1022-azure </br> 5.13.0-1023-azure </br> 5.13.0-1025-azure </br> 5.13.0-1028-azure </br> 5.13.0-1029-azure </br> 5.13.0-1031-azure </br> 5.13.0-21-generic </br> 5.13.0-22-generic </br> 5.13.0-23-generic </br> 5.13.0-25-generic </br> 5.13.0-27-generic </br> 5.13.0-28-generic </br> 5.13.0-30-generic </br> 5.13.0-35-generic </br> 5.13.0-37-generic </br> 5.13.0-39-generic </br> 5.13.0-40-generic </br> 5.13.0-41-generic </br> 5.13.0-44-generic </br> 5.13.0-48-generic </br> 5.13.0-51-generic </br> 5.13.0-52-generic </br> 5.15.0-1007-azure </br> 5.15.0-1008-azure </br> 5.15.0-1013-azure </br> 5.15.0-1014-azure </br> 5.15.0-1017-azure </br> 5.15.0-1019-azure </br> 5.15.0-1020-azure </br> 5.15.0-33-generic </br> 5.15.0-51-generic </br> 5.15.0-43-generic </br> 5.15.0-46-generic </br> 5.15.0-48-generic </br> 5.4.0-1091-azure </br> 5.4.0-126-generic </br> 5.15.0-1021-azure </br> 5.15.0-1022-azure </br> 5.15.0-50-generic </br> 5.15.0-52-generic </br> 5.4.0-1094-azure </br> 5.4.0-128-generic </br> 5.4.0-131-generic | 20.04 LTS |[9.50](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 5.4.0-1080-azure </br> 5.4.0-1083-azure </br> 5.4.0-1085-azure </br> 5.4.0-1086-azure </br> 5.4.0-1089-azure </br> 5.4.0-1090-azure </br> 5.4.0-113-generic </br> 5.4.0-117-generic </br> 5.4.0-120-generic </br> 5.4.0-121-generic </br> 5.4.0-122-generic </br> 5.4.0-124-generic </br> 5.4.0-125-generic | |||-22.04 LTS |[9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 5.15.0-1035-azure <br> 5.15.0-1036-azure <br> 5.15.0-69-generic <br> 5.15.0-70-generic <br> 5.15.0-1037-azure <br> 5.15.0-1038-azure <br> 5.15.0-71-generic <br> 5.15.0-72-generic | +22.04 LTS |[9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 5.15.0-1035-azure <br> 
5.15.0-1036-azure <br> 5.15.0-69-generic <br> 5.15.0-70-generic <br> 5.15.0-1037-azure <br> 5.15.0-1038-azure <br> 5.15.0-71-generic <br> 5.15.0-72-generic <br> 5.15.0-73-generic <br> 5.15.0-1039-azure | 22.04 LTS | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | 5.15.0-1003-azure <br> 5.15.0-1005-azure <br> 5.15.0-1007-azure <br> 5.15.0-1008-azure <br> 5.15.0-1010-azure <br> 5.15.0-1012-azure <br> 5.15.0-1013-azure <br> 5.15.0-1014-azure <br> 5.15.0-1017-azure <br> 5.15.0-1019-azure <br> 5.15.0-1020-azure <br> 5.15.0-1021-azure <br> 5.15.0-1022-azure <br> 5.15.0-1023-azure <br> 5.15.0-1024-azure <br> 5.15.0-1029-azure <br> 5.15.0-1030-azure <br> 5.15.0-1031-azure <br> 5.15.0-25-generic <br> 5.15.0-27-generic <br> 5.15.0-30-generic <br> 5.15.0-33-generic <br> 5.15.0-35-generic <br> 5.15.0-37-generic <br> 5.15.0-39-generic <br> 5.15.0-40-generic <br> 5.15.0-41-generic <br> 5.15.0-43-generic <br> 5.15.0-46-generic <br> 5.15.0-47-generic <br> 5.15.0-48-generic <br> 5.15.0-50-generic <br> 5.15.0-52-generic <br> 5.15.0-53-generic <br> 5.15.0-56-generic <br> 5.15.0-57-generic <br> 5.15.0-58-generic <br> 5.15.0-1033-azure <br> 5.15.0-60-generic <br> 5.15.0-1034-azure <br> 5.15.0-67-generic | > [!NOTE] Debian 9.1 | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azu Debian 9.1 | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | No new Debian 9.1 kernels supported in this release. | Debian 9.1 | [9.50](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new Debian 9.1 kernels supported in this release. | |||-Debian 10 | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 5.10.0-0.bpo.3-amd64 <br> 5.10.0-0.bpo.3-cloud-amd64 <br> 5.10.0-0.bpo.4-amd64 <br> 5.10.0-0.bpo.4-cloud-amd64 <br> 5.10.0-0.bpo.5-amd64 <br> 5.10.0-0.bpo.5-cloud-amd64 <br> 4.19.0-24-amd64 <br> 4.19.0-24-cloud-amd64 <br> 5.10.0-0.deb10.22-amd64 <br> 5.10.0-0.deb10.22-cloud-amd64" | +Debian 10 | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 5.10.0-0.bpo.3-amd64 <br> 5.10.0-0.bpo.3-cloud-amd64 <br> 5.10.0-0.bpo.4-amd64 <br> 5.10.0-0.bpo.4-cloud-amd64 <br> 5.10.0-0.bpo.5-amd64 <br> 5.10.0-0.bpo.5-cloud-amd64 <br> 4.19.0-24-amd64 <br> 4.19.0-24-cloud-amd64 <br> 5.10.0-0.deb10.22-amd64 <br> 5.10.0-0.deb10.22-cloud-amd64 <br> 5.10.0-0.deb10.23-amd64 <br> 5.10.0-0.deb10.23-cloud-amd64 | Debian 10 | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)| 5.10.0-0.deb10.21-amd64 <br> 5.10.0-0.deb10.21-cloud-amd64 | Debian 10 | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | 4.19.0-23-amd64 <br> 4.19.0-23-cloud-amd64 <br> 5.10.0-0.deb10.20-amd64 <br> 5.10.0-0.deb10.20-cloud-amd64 | Debian 10 | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | 4.19.0-22-amd64 </br> 4.19.0-22-cloud-amd64 </br> 5.10.0-0.deb10.19-amd64 </br> 5.10.0-0.deb10.19-cloud-amd64 | |
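Because support in the tables above depends on both the kernel version and the Mobility service agent version, checking the running kernel on the source VM before you update is a quick sanity check. This is a generic sketch and isn't specific to Site Recovery.

```azurecli-interactive
# Print the running kernel release and compare it against the supported list for your agent version
uname -r

# On Ubuntu or Debian, list the installed kernel packages as well
dpkg --list | grep linux-image
```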
storage | Lifecycle Management Policy Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-policy-configure.md | To enable last access time tracking with the Azure portal, follow these steps: 1. Navigate to your storage account in the Azure portal. 1. In the **Data management** section, select **Lifecycle management**.+1. Select the **Enable access tracking** checkbox. > [!div class="mx-imgBorder"] >  |
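The portal step above has a CLI equivalent. A minimal sketch that enables last access time tracking on the storage account's blob service follows; the account and resource group names are placeholders, so verify the flag name for your CLI version.

```azurecli-interactive
# Enable last access time tracking so lifecycle rules can act on last-access conditions
az storage account blob-service-properties update \
  --resource-group my-rg \
  --account-name mystorageaccount \
  --enable-last-access-tracking true
```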
storage | Volume Snapshot Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/volume-snapshot-restore.md | + + Title: Use volume snapshots with Azure Container Storage Preview +description: Take a point-in-time snapshot of a persistent volume and restore it. You'll create a volume snapshot class, take a snapshot, create a restored persistent volume claim, and deploy a new pod. +++ Last updated : 07/03/2023+++++# Use volume snapshots with Azure Container Storage Preview +[Azure Container Storage](container-storage-introduction.md) is a cloud-based volume management, deployment, and orchestration service built natively for containers. This article shows you how to take a point-in-time snapshot of a persistent volume and restore it with a new persistent volume claim. ++## Prerequisites ++- This article requires version 2.0.64 or later of the Azure CLI. See [How to install the Azure CLI](/cli/azure/install-azure-cli). If you're using Azure Cloud Shell, the latest version is already installed. If you plan to run the commands locally instead of in Azure Cloud Shell, be sure to run them with administrative privileges. +- You'll need an Azure Kubernetes Service (AKS) cluster with a node pool of at least three virtual machines (VMs) for the cluster nodes, each with a minimum of four virtual CPUs (vCPUs). +- Follow the instructions in [Install Azure Container Storage](container-storage-aks-quickstart.md) to assign [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role to the AKS managed identity and install Azure Container Storage Preview. +- This article assumes you've already created a storage pool and persistent volume claim (PVC) using either [Azure Disks](use-container-storage-with-managed-disks.md) or [ephemeral disk (local storage)](use-container-storage-with-local-disk.md). Azure Elastic SAN Preview doesn't support volume snapshots. ++## Create a volume snapshot class ++First, create a volume snapshot class, which allows you to specify the attributes of the volume snapshot, by defining it in a YAML manifest file. Follow these steps to create a volume snapshot class for Azure Disks. ++1. Use your favorite text editor to create a YAML manifest file such as `code acstor-volumesnapshotclass.yaml`. ++1. Paste in the following code. The volume snapshot class **name** value can be whatever you want. ++ ```yml + apiVersion: snapshot.storage.k8s.io/v1 + kind: VolumeSnapshotClass + metadata: + name: csi-acstor-vsc + driver: containerstorage.csi.azure.com + deletionPolicy: Delete + parameters: + incremental: "true" # available values: "true", "false" ("true" by default for Azure Public Cloud, and "false" by default for Azure Stack Cloud) + ``` ++1. Apply the YAML manifest file to create the volume snapshot class. + + ```azurecli-interactive + kubectl apply -f acstor-volumesnapshotclass.yaml + ``` + + When creation is complete, you'll see a message like: + + ```output + volumesnapshotclass.snapshot.storage.k8s.io/csi-acstor-vsc created + ``` + + You can also run `kubectl get volumesnapshotclass` to check that the volume snapshot class has been created. You should see output such as: + + ```output + NAME DRIVER DELETIONPOLICY AGE + csi-acstor-vsc containerstorage.csi.azure.com Delete 11s + ``` + +## Create a volume snapshot ++Next, you'll create a snapshot of an existing persistent volume claim and apply the volume snapshot class you created in the previous step. ++1. 
Use your favorite text editor to create a YAML manifest file such as `code acstor-volumesnapshot.yaml`. ++1. Paste in the following code. The `volumeSnapshotClassName` should be the name of the volume snapshot class that you created in the previous step. For `persistentVolumeClaimName`, use the name of the persistent volume claim that you want to take a snapshot of. The volume snapshot **name** value can be whatever you want. ++ ```yml
    + apiVersion: snapshot.storage.k8s.io/v1
    + kind: VolumeSnapshot
    + metadata:
    +   name: azuredisk-volume-snapshot
    + spec:
    +   volumeSnapshotClassName: csi-acstor-vsc
    +   source:
    +     persistentVolumeClaimName: azurediskpvc
    + ``` ++1. Apply the YAML manifest file to create the volume snapshot. +
    + ```azurecli-interactive
    + kubectl apply -f acstor-volumesnapshot.yaml
    + ``` +
    + When creation is complete, you'll see a message like: +
    + ```output
    + volumesnapshot.snapshot.storage.k8s.io/azuredisk-volume-snapshot created
    + ``` +
    + You can also run `kubectl get volumesnapshot` to check that the volume snapshot has been created. If `READYTOUSE` indicates *true*, you can move on to the next step. ++## Create a restored persistent volume claim ++Now you can create a new persistent volume claim that uses the volume snapshot as a data source. ++1. Use your favorite text editor to create a YAML manifest file such as `code acstor-pvc-restored.yaml`. ++1. Paste in the following code. The `storageClassName` must match the storage class that you used when creating the original persistent volume. For example, if you're using ephemeral disk (local NVMe) instead of Azure Disks for back-end storage, change `storageClassName` to `acstor-ephemeraldisk`. For the data source **name** value, use the name of the volume snapshot that you created in the previous step. The metadata **name** value for the persistent volume claim can be whatever you want. ++ ```yml
    + apiVersion: v1
    + kind: PersistentVolumeClaim
    + metadata:
    +   name: pvc-azuredisk-snapshot-restored
    + spec:
    +   accessModes:
    +     - ReadWriteOnce
    +   storageClassName: acstor-azuredisk
    +   resources:
    +     requests:
    +       storage: 100Gi
    +   dataSource:
    +     name: azuredisk-volume-snapshot
    +     kind: VolumeSnapshot
    +     apiGroup: snapshot.storage.k8s.io
    + ``` ++1. Apply the YAML manifest file to create the PVC. +
    + ```azurecli-interactive
    + kubectl apply -f acstor-pvc-restored.yaml
    + ``` +
    + When creation is complete, you'll see a message like: +
    + ```output
    + persistentvolumeclaim/pvc-azuredisk-snapshot-restored created
    + ``` +
    + You can also run `kubectl describe pvc pvc-azuredisk-snapshot-restored` to check the status of the persistent volume claim. You should see the status **Pending** and the message **waiting for first consumer to be created before binding**. ++> [!TIP] +> If you already created a restored persistent volume claim and want to apply the YAML file again to correct an error or make a change, you'll need to first delete the old persistent volume claim before applying the YAML file again: `kubectl delete pvc <pvc-name>`. ++## Delete the original pod ++Before you create a new pod, you'll need to delete the original pod that you created the snapshot from. ++1. Run `kubectl get pods` to list the pods. Make sure you're deleting the right one. +1. To delete the pod, run `kubectl delete pod <pod-name>`. ++## Create a new pod using the restored snapshot ++Once you've deleted the original pod, you can create a new pod using the restored persistent volume claim. 
Create the pod using [Fio](https://github.com/axboe/fio) (Flexible I/O Tester) for benchmarking and workload simulation, and specify a mount path for the persistent volume. ++1. Use your favorite text editor to create a YAML manifest file such as `code acstor-pod2.yaml`. ++1. Paste in the following code. The persistent volume claim `claimName` should be the name of the restored persistent volume claim that you created. The metadata **name** value for the pod can be whatever you want. ++ ```yml
    + kind: Pod
    + apiVersion: v1
    + metadata:
    +   name: fiopod2
    + spec:
    +   nodeSelector:
    +     acstor.azure.com/io-engine: acstor
    +   volumes:
    +     - name: diskpv
    +       persistentVolumeClaim:
    +         claimName: pvc-azuredisk-snapshot-restored
    +   containers:
    +     - name: fio
    +       image: nixery.dev/shell/fio
    +       args:
    +         - sleep
    +         - "1000000"
    +       volumeMounts:
    +         - mountPath: "/volume"
    +           name: diskpv
    + ``` ++1. Apply the YAML manifest file to deploy the pod. +
    + ```azurecli-interactive
    + kubectl apply -f acstor-pod2.yaml
    + ``` +
    + You should see output similar to the following: +
    + ```output
    + pod/fiopod2 created
    + ``` ++1. Check that the pod is running and that the persistent volume claim has been bound successfully to the pod: ++ ```azurecli-interactive
    + kubectl describe pod fiopod2
    + kubectl describe pvc pvc-azuredisk-snapshot-restored
    + ``` ++1. Run an fio test to check the current status of the restored volume: ++ ```azurecli-interactive
    + kubectl exec -it fiopod2 -- fio --name=benchtest --size=800m --filename=/volume/test --direct=1 --rw=randrw --ioengine=libaio --bs=4k --iodepth=16 --numjobs=8 --time_based --runtime=60
    + ``` ++You've now deployed a new pod from the restored persistent volume claim, and you can use it for your Kubernetes workloads. ++## See also ++- [What is Azure Container Storage?](container-storage-introduction.md) |
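If you want to confirm that the restore preserved your data rather than just provisioning an empty volume, one approach is to write a marker file before taking the snapshot and read it back from the restored pod. A minimal sketch, assuming the original pod was named `fiopod` and both pods mount the volume at `/volume` (adjust the names to match your deployment):

```azurecli-interactive
# Before taking the snapshot: write a small marker file to the original volume.
kubectl exec -it fiopod -- sh -c "echo snapshot-test > /volume/marker.txt"

# After the restored pod is running: verify the marker file came back with the snapshot.
kubectl exec -it fiopod2 -- cat /volume/marker.txt
```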
stream-analytics | Cluster Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/cluster-overview.md | Last updated 05/10/2022 Azure Stream Analytics Cluster offers a single-tenant deployment for complex and demanding streaming scenarios. At full scale, Stream Analytics clusters can process more than 400 MB/second in real time. Stream Analytics jobs running on dedicated clusters can leverage all the features in the Standard offering and include support for private link connectivity to your inputs and outputs. -Stream Analytics clusters are billed by Streaming Units (SUs) which represent the amount of CPU and memory resources allocated to your cluster. A Streaming Unit is the same across Standard and Dedicated offerings. You can purchase from 36 to 396 SUs for each cluster, by increments of 36 (36, 72, 108...). A Stream Analytics cluster can serve as the streaming platform for your organization and can be shared by different teams working on various use cases. +Stream Analytics clusters are billed by Streaming Units (SUs), which represent the amount of CPU and memory resources allocated to your cluster. A Streaming Unit is the same across the Standard and Dedicated offerings. Azure Stream Analytics supports two streaming unit structures: SU V1 (to be deprecated) and SU V2 (recommended). [Learn more](./stream-analytics-streaming-unit-consumption.md). ++When you create a cluster in the portal, a **Dedicated V2** cluster is created by default. Dedicated V2 clusters support 12 to 66 SU V2s and can be scaled in increments of 12 (12, 24, 36...). Dedicated V1 clusters are ASA's original offering and are still supported; they require a minimum of 36 SUs. ++The underlying compute power for V1 and V2 streaming units is as follows: ++ ++For more information on dedicated cluster offerings and pricing, visit the [Azure Stream Analytics Pricing Page](https://azure.microsoft.com/pricing/details/stream-analytics/). ++> [!Note] +> A dedicated cluster created with SU V2 capacity can only run jobs that use SU V2s. You can't run both V1 and V2 SUs in the same dedicated cluster; mixing them isn't supported due to capacity complications. ++A Stream Analytics cluster can serve as the streaming platform for your organization and can be shared by different teams working on various use cases. ++> [!Note] +> Azure Stream Analytics also supports virtual network (VNet) integration, which is available in public preview. VNet integration provides network isolation by deploying dedicated instances of Azure Stream Analytics into your virtual network. A minimum of 6 SU V2s is required for VNet jobs. [Learn more](./run-job-in-virtual-network.md). ## What are Stream Analytics clusters Stream Analytics clusters are powered by the same engine that powers Stream Anal * Single tenant hosting with no noise from other tenants. Your resources are truly "isolated" and perform better when there are bursts in traffic. -* Scale your cluster between 36 to 396 SUs as your streaming usage increases over time. +* Scale your cluster between 12 and 66 SU V2s as your streaming usage increases over time. * VNet support that allows your Stream Analytics jobs to connect to other resources securely using private endpoints. |
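For automation scenarios, clusters can also be created outside the portal with the `stream-analytics` Azure CLI extension. The following is a hedged sketch rather than a definitive provisioning path: the resource names are placeholders, and the SKU name and capacity shown assume a Dedicated V1 cluster (36 SUs), so check the extension reference for the values that match the cluster type you want.

```azurecli-interactive
# Requires the stream-analytics CLI extension.
az extension add --name stream-analytics

# Create a cluster; the SKU capacity must be an allowed SU size for the chosen cluster type.
az stream-analytics cluster create \
    --resource-group <resource-group> \
    --cluster-name <cluster-name> \
    --location <region> \
    --sku name="Default" capacity=36
```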
update-center | Prerequsite For Schedule Patching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/prerequsite-for-schedule-patching.md | Additionally, in some instances, when you remove the schedule from a VM, there i > [!IMPORTANT] > For a continued scheduled patching experience, you must ensure that the new VM property, *BypassPlatformSafetyChecksOnUserSchedule*, is enabled on all your Azure VMs (existing or new) that have schedules attached to them by **30th June 2023**. This setting ensures that machines are patched by using your configured schedules and aren't auto patched. Failing to enable it by **30th June 2023** results in an error stating that the prerequisites aren't met. +## Schedule patching in an availability set ++1. All VMs in a common [availability set](../virtual-machines/availability-set-overview.md) aren't updated concurrently. +1. VMs in a common availability set are updated within Update Domain boundaries, and VMs across multiple Update Domains aren't updated concurrently. + ## Find VMs with associated schedules To identify the VMs with associated schedules for which you have to enable the new VM property, follow these steps: |
update-center | Scheduled Patching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/scheduled-patching.md | Update management center (preview) uses maintenance control schedule instead of > [!Note] > If you set the patch mode to Azure orchestrated (AutomaticByPlatform) but don't enable the **BypassPlatformSafetyChecksOnUserSchedule** flag and don't attach a maintenance configuration to an Azure machine, it's treated as an [Automatic Guest patching](../virtual-machines/automatic-vm-guest-patching.md)-enabled machine, and the Azure platform will automatically install updates according to its own schedule. [Learn more](./overview.md#prerequisites). +## Schedule patching in an availability set ++1. All VMs in a common [availability set](../virtual-machines/availability-set-overview.md) aren't updated concurrently. +1. VMs in a common availability set are updated within Update Domain boundaries, and VMs across multiple Update Domains aren't updated concurrently. ## Schedule recurring updates on single VM |
update-center | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/troubleshoot.md | Azure machine has the patch orchestration option as AutomaticByOS/Windows automa ### Resolution -If you don't want any patch installation to be orchestrated by Azure or aren't using custom patching solutions, you must change the patch orchestration option to **Customer Managed Schedules (Preview)** and don't associate a schedule/maintenance configuration to the machine. This will ensure that no patching is performed on the machine until you change it explicitly. For more information, see [User scenarios](prerequsite-for-schedule-patching.md#user-scenarios). +If you don't want any patch installation to be orchestrated by Azure or aren't using custom patching solutions, you must change the patch orchestration option to **Customer Managed Schedules (Preview)** and not associate a schedule or maintenance configuration with the machine. This ensures that no patching is performed on the machine until you explicitly change the setting. For more information, see **scenario 2** in [User scenarios](prerequsite-for-schedule-patching.md#user-scenarios). :::image type="content" source="./media/troubleshoot/known-issue-update-settings-failed.png" alt-text="Screenshot that shows a notification of failed update settings."::: |
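To see how a machine's patch settings are currently configured, or to move it to customer-managed scheduling from a script, you can work with the VM's patch settings directly. A minimal sketch, assuming a Windows VM and placeholder resource names; in the VM model, **Customer Managed Schedules (Preview)** corresponds to `patchMode` set to `AutomaticByPlatform` together with the `bypassPlatformSafetyChecksOnUserSchedule` flag:

```azurecli-interactive
# Inspect the current patch settings (use osProfile.linuxConfiguration for Linux VMs).
az vm show \
    --resource-group <resource-group> \
    --name <vm-name> \
    --query "osProfile.windowsConfiguration.patchSettings"

# One way to switch to customer-managed scheduling: AutomaticByPlatform plus the bypass flag.
az vm update \
    --resource-group <resource-group> \
    --name <vm-name> \
    --set osProfile.windowsConfiguration.patchSettings.patchMode=AutomaticByPlatform \
          osProfile.windowsConfiguration.patchSettings.automaticByPlatformSettings.bypassPlatformSafetyChecksOnUserSchedule=true
```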
virtual-network | Virtual Network Service Endpoints Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-service-endpoints-overview.md | Once you configure service endpoints to a specific service, validate that the se - Indicates that a more direct connection to the service is in effect compared to any forced-tunneling routes >[!NOTE]-> Service endpoint routes override any BGP or UDR routes for the address prefix match of an Azure service. For more information, see [troubleshooting with effective routes](diagnose-network-routing-problem.md). +> Service endpoint routes override any BGP routes for the address prefix match of an Azure service. For more information, see [troubleshooting with effective routes](diagnose-network-routing-problem.md). ## Provisioning |
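To validate which route is in effect for a given network interface, as described in the validation steps above, you can list the NIC's effective routes; service endpoint routes appear with the next hop type `VirtualNetworkServiceEndpoint`. A minimal sketch with placeholder names:

```azurecli-interactive
# List effective routes for a NIC; look for next hop type VirtualNetworkServiceEndpoint.
az network nic show-effective-route-table \
    --resource-group <resource-group> \
    --name <nic-name> \
    --output table
```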
web-application-firewall | Waf Sensitive Data Protection Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/waf-sensitive-data-protection-configure.md | Last updated 06/13/2023 # How to mask sensitive data on Azure Web Application Firewall -The Web Application Firewall's (WAF's) Log Scrubbing tool helps you remove sensitive data from your WAF logs. It works by using a rules engine that allows you to build custom rules to identify specific portions of a request that contain sensitive data. Once identified, the tool scrubs that information from your logs and replaces it with _*******_. +The Web Application Firewall's (WAF's) Log Scrubbing tool, now in preview, helps you remove sensitive data from your WAF logs. It works by using a rules engine that allows you to build custom rules to identify specific portions of a request that contain sensitive data. Once identified, the tool scrubs that information from your logs and replaces it with _*******_. The following table shows examples of log scrubbing rules that can be used to protect your sensitive data: |
web-application-firewall | Waf Sensitive Data Protection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/waf-sensitive-data-protection.md | Last updated 06/13/2023 # What is Azure Web Application Firewall Sensitive Data Protection? -The Web Application Firewall's (WAF's) Log Scrubbing tool helps you remove sensitive data from your WAF logs. It works by using a rules engine that allows you to build custom rules to identify specific portions of a request that contain sensitive information. Once identified, the tool scrubs that information from your logs and replaces it with _*******_. +The Web Application Firewall's (WAF's) Log Scrubbing tool, now in preview, helps you remove sensitive data from your WAF logs. It works by using a rules engine that allows you to build custom rules to identify specific portions of a request that contain sensitive information. Once identified, the tool scrubs that information from your logs and replaces it with _*******_. ## Default log behavior |